Archive for the ‘Software’ Category

Tesla is having an interesting year with its Autopilot capabilities.   Its software, supported by cameras, radar and ultrasonic sensors, can automatically drive on a highway including changing lanes and adjusting speed depending on surrounding traffic. It can find a parking space and parallel park itself. It even comes when called. The Summon feature allows you to call your car from your phone and have it meet you at your front door. How many times have you wished for that feature when you can’t quite remember exactly where you parked in that huge shopping center lot on a rainy cold day?

Joshua Neally is quite happy with his Model X. On his way home in Springfield, Missouri, he suddenly felt something like “a steel pole through my chest.” He let his Tesla autonomously take him more than 20 miles to the off-ramp near the closest hospital. Neally had to drive himself the final stretch, but he survived a pulmonary embolism, which kills 50,000 people a year, 70% of them within the first hour.

On the other hand, Joshua Brown had a different experience with his Tesla Model S in Williston, Florida. His car’s sensor system failed to distinguish a large white 18-wheeler crossing the highway against the bright sky. Brown was killed when his car went under the trailer, tearing off its roof.

Tesla cars are not fully autonomous: they require the driver to remain alert and keep their hands on the steering wheel. Neally’s car got him safely 20 miles and off the freeway, but he had to manually drive to the emergency entrance. Brown was apparently watching a Harry Potter movie at the time of the crash; at least the movie was still playing when the car finally stopped after snapping a telephone pole a quarter mile from the accident.

Uber in Pittsburgh is offering rides in self-driving Ford Fusion cars. There is still a driver who has to be ready to take over at any time. The current software will not automatically change lanes, like when a delivery van is double-parked on a city street. The driver needs to take control to safely go around the obstacle.

Like today’s cruise control, the self-driving features must be engaged by the driver, and manually braking or accelerating disables them.

Surprisingly, bridges can be a problem for autonomous vehicles. One might think that nothing could be simpler than a bridge: a straight set of well-marked lanes with a definite right side and minimal distractions like pedestrians and cross streets. But that simplicity is a big part of the problem. Without environmental cues like buildings, it is harder for the car to figure out exactly where it is. This is one reason Pittsburgh was chosen for this first Uber self-driving car rollout. Pittsburgh also has four seasons, an irregular grid of roads, lots of potholes, and Carnegie Mellon University’s robotics center, which provides the self-driving hardware and software.

We are in the awkward learning phase with autonomous vehicles. Having someone behind the wheel to take over is problematic. It is hard enough to stay focused on driving when you are actually driving, let alone when you have very little to do.

The last word:

If you have one of the current crop of semi-autonomous vehicles, you must pay attention at all times. Tesla emits an audible chime when it detects that the driver’s hands are not on the wheel. Uber currently has an engineer behind the wheel, and another ride-along engineer in the passenger seat monitoring the car and taking notes.

If you are riding in a semi-autonomous vehicle, like the Uber cars in Pittsburgh, do not distract the driver. Treat the driver like you would in a normal vehicle.

On the other hand, Teslas had been driven more than 130 million miles in autopilot mode prior to its first fatality, compared to one fatality every 94 million miles in the US and every 60 million miles worldwide.

Always remember, these software systems are still in beta. You wouldn’t trust your business to beta software; don’t trust your life to it!

Comments solicited.

Keep your sense of humor.



Windows Server 2003 (WS2003) was first released in, surprise, 2003. It replaced Windows Server 2000. Microsoft has released several derivatives including Windows Compute Cluster Server 2003, Windows Storage Server 2003, Windows Small Business Server 2003, Windows Home Server, and Windows Server 2003 for Embedded Systems.

WS2003 mainstream support ended in July 2010. On July 14, 2015, Microsoft will officially end extended support for WS2003. Microsoft will not release any updates, including security updates or patches, after this date. At that point you can pay Microsoft for security fixes for WS2003, but it is very expensive and not delivered promptly. Most antivirus solutions will not be supported on WS2003 after 7/14/2015, meaning that there will be no signature updates for new vulnerabilities. Considering the rate at which new malware opportunities are discovered in all flavors of Windows platforms, any WS2003 systems you have in production will quickly become vulnerable. As one data point, there were 37 critical updates for WS2003 in 2013, 10 years after the product’s release. WS2003 will not pass any further security or compliance audits. Expect stiffer fines and other penalties if you experience a data breach where a WS2003 system is part of the application environment.

This should not be a surprise. Microsoft has published its support policy and product end of life chart on its web site for over ten years. There are a lot of servers still running WS2003 out there. A Microsoft survey in January 2014 showed about 22 million WS2003 systems in use. A large number of those are in small and medium sized businesses. Many of these SMB companies do not have large IT staffs or budget to make any kind of a migration.   There are probably at least 10 million WS2003 systems still in use today. Even many Fortune 500 companies are still dependent on WS2003, and most will not have migrated by the deadline, especially as it seems to take about six months to make the migration off WS2003.

Microsoft introduced Windows Server 2008 in 2008 as the successor product to WS2003. However, Windows Server 2008 is not the best destination for your WS2003 systems. Microsoft will end mainstream support for Windows Server 2008 on the same day that it ends all support for Windows Server 2003, July 14, 2015, while extended support ends in January 2020. If you need to move off Windows Server 2003 in any of its flavors, you are better served to jump to Windows Server 2012. Windows Server 2012 was generally available in September 2012, and its R2 update was released in October 2013. Mainstream support for Windows Server 2012 is scheduled to run until January 2018.

Microsoft provides assistance. Perhaps as an indication of their sense of urgency, the first thing you see on that Microsoft page is a countdown clock telling you, down to the second, how long you have. Microsoft is, not surprisingly, pushing migration of your WS2003 servers to the cloud powered by Microsoft Azure. In some cases, that may make sense, but only if you want to make a significant change in your operations and procedures. Moving to the Cloud should be a business decision, not a technology decision. Like a lot of things involving cloud computing, the end point is often a better place to be, but getting there under a deadline can be risky. You should at least look at the material Microsoft provides to help in discovering which of your applications and workloads are running on WS2003, assessing those applications and workloads by type, importance, and complexity, and choosing a migration destination for each. For some of those workloads and applications, moving them to the Cloud may be the easier and less risky solution.

Your IT department probably has some good reasons for not migrating:

  • Your current server hardware may not support Windows Server 2012.
  • Some of your mission-critical applications may not be supported on Windows Server 2012.
  • You do not have sufficient financial or IT resources to make the migration while simultaneously keeping your IT environment running.
  • Your staff is unfamiliar with Windows Server 2012.

The second may be the most serious, and may take the longest to fix. In the worst case, you may need to migrate to a different application.

In the meantime you may be able to mitigate some of the risk by restricting access to your WS2003 servers. Products like the Unisys Stealth Solution may help. It can completely isolate your WS2003 systems from the outside world, allowing communication only from the specific systems and users you permit. Since the protection is based on user identity, not specific network location or device identity, the rights of an individual change automatically when their role changes. As Unisys says, “You can’t hack what you can’t see.”

If you do not have the resources, get help. There are many companies out there with experience in migrating off WS2003. You do not have to go it alone.

The last word:

Windows Server 2003 is potentially as serious a security problem as Windows XP. Hopefully you are well past getting rid of that OS from your entire IT environment, as have all of your business partners who share any proprietary, financial, or customer-protected data.

If you are running Windows Server 2008, you should start planning to move those servers to Windows Server 2012.

The keys to a successful operating system migration are planning and testing. These exercises can feel like a huge drain on your resources, and each migration can itself cause new problems. But you have to do it; you cannot afford to be vulnerable.

Comments solicited.

Keep your sense of humor.



I suspect you have heard of the Heartbleed (or “Heart Bleed”) bug on “the entire Internet” along with predictions of doom for all of us who use the Internet. Heartbleed is an indication that each new crop of programmers apparently has to make the same set of mistakes instead of learning from the past 60 years of the programming art, and an indication that many companies, especially the new social media companies, are fixated on rushing into new projects and really do not care about the security of their customers’ data.

Heartbleed is a bug, not an engineered virus or some other form of malware. It is a good example of very bad programming. This was not caused by malevolent people or governments, but by incompetence. The malevolent people, however, were eager to take advantage of it.

Decades ago I went to UC Berkeley to work on a Ph.D. in Computer Science. Most students in the program had gone right from high school to Berkeley and worked straight through their BS, MS and were now working on their doctorate. I was the exception: I had spent the prior half dozen years actually working in the real world. One young man in particular was absolutely brilliant – straight 4.0 average from his freshman year, and excellent at translating a real problem into a software solution. He came to me one day with his latest triumph: the user interface module to a new operating system. He asked me to type in “TI”. I typed in “3#”. The whole OS crashed. He got a long hang-dog face and asked, “Why did you do that?” My answer was that I was a dumb user and could not be trusted.

The Heartbleed bug is exactly that. The program fails to check that the length claimed in a request matches the data actually received, and will willingly send back up to 64 KB of whatever is currently in the computer’s memory. The particular transaction is called a “heartbeat” – a simple way for one computer to determine if another is still alive and connected. This transaction is usually associated with servers, computers that deal with lots of transactions from many different users, including sign-ons. Thus the information that the bug sends can contain user names and passwords, bank account numbers, or anything.
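OpenSSL is written in C, but the logic error is easy to sketch. Here is a simplified Python illustration (not the actual OpenSSL code; the function names and the memory simulation are mine) of the missing length check and the one-line fix:

```python
def handle_heartbeat_buggy(request: bytes, memory: bytes) -> bytes:
    """Echo a heartbeat payload, trusting the attacker-supplied length."""
    claimed_len = int.from_bytes(request[:2], "big")  # attacker controls this
    payload = request[2:]
    start = memory.find(payload)  # where the payload sits in (simulated) memory
    # BUG: copies claimed_len bytes even if the payload is shorter,
    # leaking whatever happens to sit next to it in memory.
    return memory[start:start + claimed_len]


def handle_heartbeat_fixed(request: bytes) -> bytes:
    """Echo a heartbeat payload only after validating the length field."""
    claimed_len = int.from_bytes(request[:2], "big")
    payload = request[2:]
    if claimed_len > len(payload):  # the check Heartbleed was missing
        return b""                  # malformed request: silently discard
    return payload[:claimed_len]
```

An attacker simply sends a tiny payload with a huge claimed length, and the buggy handler obligingly returns the adjacent memory contents.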

In good practice, this kind of bug gets caught in design reviews or code reviews. A bunch of programmers get together and inspect what someone is planning on doing or the code they have written. The guilty programmer gets embarrassed, learns a lesson, and fixes the problem. No harm done. Obviously, those reviews never happened.

Instead, the faulty code was released as a beta version of OpenSSL. SSL is the security code used to establish secure connections between your computer and some other computer on the Internet. You know you are secure if you see https:// in front of the URL or you see the little padlock in front of the URL. SSL has been around since 1996 and is the backbone for secure Internet connections. OpenSSL is an open source implementation, meaning that companies can use the product without paying a license fee. A beta release usually means that the software is feature complete, but may not be bug free. The basic rule of a beta release is do not put it into production. It may have bugs, it may be unstable, or it may not work under some conditions.

In this case, many companies took that beta release of OpenSSL and put it into production, thus exposing their customers to a huge security problem. In my view, any company that put this beta release of OpenSSL into production was grossly negligent and should be responsible for the financial results of such an irresponsible action.

If a company did not install the buggy beta, then their customers were not exposed. If a company installed the buggy beta, and then later fixed the problem, they are now secure. But if you connected to their server while the buggy beta was installed, your data was in danger. Unlike many malware attacks, it is virtually impossible to determine what data was sent out and therefore which customers were potentially compromised. If you connected to a server running this beta version of OpenSSL you should assume you were compromised.

There are several lists of which companies potentially compromised your data. Some popular companies that installed the buggy beta and might therefore have exposed your information:

  • Amazon Web Services
  • Facebook
  • GoDaddy
  • Google (Gmail, YouTube, Wallet, Play, and potentially Google Plus, Google docs and Google-hosted web sites were all at risk for some period of time)
  • Healthcare.gov (surprise!)
  • Instagram
  • Pinterest
  • Tumblr
  • Twitter
  • WordPress (as a blogger, not a reader)
  • Yahoo & Yahoo email

A few of the many companies who did not install the buggy beta, and therefore did not expose you to this danger:

  • Amazon.com (the “buy something” site)
  • AOL email
  • Apple
  • eBay
  • Hotmail
  • Intuit (including TurboTax)
  • IRS (and as far as I can tell all other US government sites, except Healthcare.gov)
  • LinkedIn
  • Microsoft
  • PayPal
  • Target
  • Walmart
  • Almost all financial companies (American Express, Bank of America, Chase, E*Trade, Fidelity, and many more)

If you really want to understand how the Heartbleed bug works, check out this YouTube video.

The last word:

What should you do?

  1. Check the companies you sign in to. If they say they “fixed the problem” they should go on your danger list. Only if they never installed the buggy beta are you safe with them.
  2. If you have sign-ins to any of the companies that might have compromised your data, change your password immediately.
  3. If you use the same password at other sites, change it there also. Once a cyber-criminal gets one of your passwords, he is likely to try that password at other sites.
  4. Continue to monitor your financial status.

Check out your own company: did you expose any of your customers’ data to this bug? Ask your IT director or CIO if the buggy beta was ever placed in production. If so, I would suggest firing the responsible manager and making sure your IT group does not ever put beta versions of anything into production. Also check out any partners, including your Cloud Service Provider, to make sure that they did not expose your customers to harm.

Then proactively apologize to your customers, indicating the date range when your company exposed them and strongly recommend they immediately change their passwords to your site. Taking the initiative here will, at least, earn good will.

Comments solicited.

Keep your sense of humor.




Well, not really.

Microsoft released Windows XP in August of 2001 as its personal computer operating system within the Windows NT family of operating systems. As late as October 2010, you could still buy a new PC with Windows XP installed. Windows XP was the most widely used operating system until August 2012, when Windows 7 overtook it. Mainstream support ended in April of 2009, and extended support ended on April 8, 2014. The last stable release was in April 2008. Going forward, it will cost customers on the order of $200 per PC per year to maintain security patches.

There have been a lot of security patches for Windows XP. The phrase “Patch Tuesday” refers to the second Tuesday of each month in North America, when Microsoft issues its planned security updates. Of course, sometimes there is an extraordinary Patch Tuesday fourteen days after a scheduled Patch Tuesday, plus critical patches in between. There were 99 security flaws detected in Windows XP in 2013. I would not expect the need for frequent security patches for Windows XP to diminish at all.
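The “second Tuesday” rule is simple enough to compute. As a small sketch (the function name is mine):

```python
from datetime import date, timedelta


def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday (Patch Tuesday) of the given month."""
    first = date(year, month, 1)
    # Days from the 1st to the first Tuesday (Tuesday is weekday 1).
    offset = (1 - first.weekday()) % 7
    # The second Tuesday is one week after the first.
    return first + timedelta(days=offset + 7)
```

For example, `patch_tuesday(2014, 4)` gives April 8, 2014 — the very Patch Tuesday on which Windows XP extended support ended.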

But Windows XP is not dead, and is not going to disappear for a long time. As of March 2014, NetMarketShare reported that Windows XP still had over one quarter of the desktop operating system market share. That represents on the order of 500 million computers worldwide.

The simple solution is for people to move off Windows XP. For some people, that is conceptually easy:

  1. Buy a new PC (your old one probably won’t support Windows 7 or Windows 8).
  2. Move all of your applications and files to the new PC.
  3. Deal with the applications that need updating, including Microsoft Office, or find replacements for those that just won’t run on the new OS at all.

There are a lot of proprietary software products that just will not run on anything but Windows XP. Some amusing examples:

  • 95% of all ATMs in the world. Yes, the bank teller machine that gives you money is probably running Windows XP, and probably communicates back to the branch or service provider over the Internet.
  • Cash registers and other POS (point of sale) devices, many of which are using wireless technology to communicate back to the server.
  • SCADA systems (supervisory control and data acquisition). These systems allow remote control and data acquisition for environments like manufacturing, power generation and distribution, and water and sewer management including dams. Again, they are often running over the Internet.
  • Industrial robots.
  • Slot machines.

These systems are all prime targets of cyber-criminals and cyber-terrorists.

There actually is a solution for many XP environments: the Unisys Stealth Solution. It can completely isolate your XP systems from the outside world, allowing communication only from the specific systems and users you permit. Since the protection is based on user identity, not specific network location or device identity, the rights of an individual change automatically when their role changes. As Unisys says, “You can’t hack what you can’t see.”

The last word:

This is serious. If your company is relying on Windows XP you are much more exposed to attack now than you were last month. What is worse, your exposure is not just from your own systems, but the systems used by your partners and employees when they work remotely. Have you asked your CIO how many XP systems are still in use in your extended IT infrastructure? Has your CIO checked with your partners? What about your POS equipment or the factory robots used to make the products you sell?

Your first step should be to determine what you really have, and make a plan to fix the problems. It may literally take years to eliminate all of the issues, so you will need to have mitigation efforts in place in the meantime.

You should have done this at least five years ago, but if you didn’t, now is the time to act.

Comments solicited.

Keep your sense of humor.




The technology of a hospital room has sure changed.  Through most of history, a hospital was more a place to isolate patients, and consisted mainly of mattresses on the floor.  In the last four or five hundred years it advanced to wards full of standard single beds with a staff of nurses and doctors who wandered around and did what they could.  It wasn’t until the early 1800s that side rails appeared on beds, and the modern three-segment hospital bed showed up in the early 1900s.  In 1945, Dr. Marvel Darlinton Beem invented the first “modern” hospital bed controlled by patient-accessible buttons.

If you have recently been in a modern hospital, you have seen the monitoring devices for blood pressure, temperature, blood oxygen content, pulse rate, breath rate and EKG that not only display their information in the room but also at a nurse’s station.  Nurses and doctors use laptops on rolling stands to record medicines given, orders, and other information.  All of this data plus the results of X-Ray, MRI, CAT, PET, … scans can be instantly shared with doctors located almost anywhere.

"Modern" Call Button

Yet in the midst of all this technology, there is one thing that is still back in the middle of the 20th century: the nurse call button.  In some places, it is literally a string hanging down the wall; in many, a simple button.  I recently visited a relative in a brand-new wing of a very up-to-date hospital.  It was a fabulous place, with only single patient rooms, each with a sofa-bed for relatives to use, and with all of the modern medical equipment they could use.  Even a heater for the blankets when they had to take a patient out of the room for a test.  They had a very nice control for the TV, with a button to call the nurse.  There was also a call button on each side of the bed that the patient could reach.  If you press the call button, it turns on a light over the room door and flashes a light on the nurse monitoring station.  That’s all: just an indication that the patient pushed the button.

In every one of these situations, the caregiver has no idea what the problem is.  The patient could be having trouble breathing, or in great pain, want a glass of water, or just want to know when lunch is coming.  Laws in some jurisdictions and hospital procedures often set the maximum time in which a caregiver must respond to a call.  However, those rules do not take into account the type of problem.  Clearly it is more urgent to respond to chest pains than a glass of water.

So the caregiver stops what he is doing, goes to the room (possibly having to put on a gown, mask or gloves depending on the patient’s situation), ascertains what the problem is and its urgency, and then handles it.  Handling it may require leaving the room, getting something, and returning.  If necessary, that means putting on another set of gown, mask or gloves.

The situation becomes even more difficult if the patient cannot verbally describe what she wants.  This could be because of a stroke that has impacted her speech, the insertion of a breathing tube or mask, or simply because the patient and the caregiver do not share a language.

Most of us have been in the situation where we could not communicate very well with someone.  Maybe on a business or pleasure trip to a foreign country, you had trouble deciphering the menu, asking directions and understanding the answer, or telling the taxi driver where you wanted to go (a problem I’ve had in both Mexico City and New York City).  None of these were really serious issues, but imagine if you were sick or injured: could you have explained your problem to someone?  Some of us have had relatives or friends who had a stroke, and observed the frustration that a very intelligent person has in communicating because of difficulty in finding the right words and getting their mouth to say them intelligibly.  It is a mind imprisoned in its own body.

Imagine also the frustration of the caregiver, where every visit in response to a call button push may be trivial, or an emergency.  Or when the caregiver goes to a patient, then has to go out and get something that the caregiver could have taken to the room the first time if only he had known what the problem was.  Or when the caregiver can’t understand what the patient wants, other than that the patient is in distress.

One of the companies I am working with has developed a device that addresses this, using, for example, an iPad as the patient’s device.  The iPad continues to have all of its normal functionality, but the caregiver communication app has a set of buttons, most of which link to another screen with additional options.  So the patient can indicate that she wants a glass of water, or a cup of hot tea; that she needs to go to the bathroom; that she is in pain, at what level, and where; that she wants to talk to her son, or her priest, or her friend.  In addition, she can send a text message to the caregiver, and can receive a text message back from the caregiver.  So, “when is my next trip to the MRI?” can be answered with “2PM” without the caregiver needing to actually visit the room.

These messages are all sent to a window at the nurse’s station, and they can be texted to the caregiver’s cell phone so while doing rounds the caregiver can instantly know of another patient’s request and appropriately prioritize the response.  The app can also be configured to send the messages to the cell phone of a friend or relative of the patient.
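The product’s actual message format is not public, but the idea behind such a system can be sketched: each request carries a type and an urgency level, so the nurse’s station can triage before anyone walks to a room. The request types, priority levels, and names below are all hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical urgency levels: lower number = respond sooner.
PRIORITY = {
    "breathing": 0, "chest_pain": 0,  # emergencies
    "pain": 1, "bathroom": 2,
    "water": 3, "question": 3,        # can wait, or be answered by text
}


@dataclass
class PatientRequest:
    room: str
    kind: str                # one of the PRIORITY keys
    note: str = ""           # optional free-text message from the patient
    created: datetime = field(default_factory=datetime.now)

    @property
    def priority(self) -> int:
        # Unknown request types are treated as fairly urgent.
        return PRIORITY.get(self.kind, 1)


def triage(pending: list[PatientRequest]) -> list[PatientRequest]:
    """Order pending requests by urgency, then by how long they've waited."""
    return sorted(pending, key=lambda r: (r.priority, r.created))
```

Unlike a bare call light, a queue like this tells the caregiver what to bring and whom to see first.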

The app also can be easily configured to display another language.  In that case, the app displays all of the button descriptions and other text in the patient’s language, yet the messages received by the caregiver are in his native language.  The app makes a reasonably good translation even of text messages between the caregiver and patient.

This obviously isn’t the solution in all cases.  It will not be suitable for the very young, or those with very limited arm and hand movement, or those that due to medical condition or medicine effects may not be able to understand or use the app.

The nurse’s station application also keeps track of response times and who responded, allowing the hospital nursing staff to monitor performance towards their response-time goals.  It also, therefore, provides a record to show how quickly a caregiver actually did respond in case of an “I pushed the button and nobody came for 20 minutes” claim.  A large hospital can configure an administrative server that will monitor all calls from all rooms in real time, allowing supervisors to appropriately spread their limited staff to the wards with the most activity.

The last word:

Being in the hospital is no joyous occasion, except maybe in the maternity ward.  With the big push on improving health care while reducing its cost, it would seem that a relatively inexpensive device that would reduce patient and caregiver frustration, improve the ability to react to patient requests and reduce the workload on caregivers would be of great interest.  You can view the iPad app description and the Press Release.

Comments solicited.

Keep your sense of humor.



If you run a business or some division of a larger one, some of your revenue probably comes from responding to a request for proposals, or RFP.  Many large organizations and almost all government entities use RFPs for most or all of their procurements.  It, in theory, brings structure to the procurement process, defines the decision methodology, and gives a perception of fairness to the murky business of buying products or services.  In theory, the buyer advertises the RFP, allows for a period of questions, has a specified date and often format of the response, and a documented decision process and timeline.

Preparing an RFP response is like writing a term paper in high school or college.  You spend a lot of time creating it, and there isn’t much good that can come from it.  Every hour you spend working on the response is an hour you aren’t earning any money.  So how do you decide whether to bid or not?

There are some obvious questions, like whether the organization has the budget to actually make an award, and whether you really want to do the job and be associated with the procuring organization.  But to me the most important question is how you got the RFP.  Did you find it yourself on some website or through some service, or did the client send it to you or at least ask you to bid on it?

Years ago I was asked to set up a technical pre-sales organization for a major division of a well-known computer company, as we called them in those days.  My real task was to figure out why we were only winning about 1 in 10 RFP responses we submitted.  After one year, we were winning about 1 in 3.  In this specific environment, it cost about $1M to put together a proposal, but the awards were eight-to-ten digit numbers, so this improvement had a noticeable benefit to the bottom line.  We made a number of changes, like only bidding on deals that we could actually deliver.  But the most important change was that if we weren’t in there influencing the deal before the RFP came out, we did not bid on it.

Being there allows you to get answers to many of the basic questions before you decide to bid: do they have budget, what is the decision process, who makes the decision and who influences that decision, is there an incumbent they like or pre-selected winner that will be hard to overcome, and, most importantly, what is the real problem they are trying to solve.

Too many RFPs are written by desperate people who don’t understand their own problem.  They have somehow determined the solution, and put out an RFP to get that solution.  If you really understand the underlying issues, you can put in a very strong proposal that indicates you do understand and have the ability and desire to fix it.

Most RFPs are written by people who have never written an RFP before.  This is true even in large companies with a separate procurement organization.  The procurement folk may manage the process and have templates, but they don’t control the technical content.  And those people have rarely had much experience writing an RFP.  When you see something strange, question it.  You probably have a better idea of what the client really needs than they do, especially if you have been working with them for some time.  And it can be a key part of gaining the attention of the influencers and decision makers.

I suggest that you treat a proposal, and the pre-sales effort leading up to it, as a project with a budget.  As you start working on it, estimate the potential total value of the award and determine how much you are willing to spend on getting it.  This might be a percentage of the expected client payments over two years.  I wouldn’t go above 10%.  Keep track of your time and other expenses just like you would for a “real” project.   Check on the status periodically, and be prepared to walk away.  I’m working on a deal right now that was originally supposed to close in June, now maybe in December.  I’m still pursuing it, but with a lot less time and effort.  Two things I carefully watch:

  • If the award date is moving away faster than the calendar is moving forward, it will never close.
  • If the expected value of the award is dropping, so should your proposal effort budget.

Don’t make the common mistake of saying “gee, I’ve already spent 100 hours working on it, maybe another 50 will close it.”  If it isn’t profitable at 100 hours, it isn’t going to be profitable at 150.  Take that 50 hours and find another deal to work on.

If you get good enough to win one in three proposals, then spending 10% of the expected value on each proposal means you are spending about 30% of your revenue on sales, plus whatever else you are spending on your marketing efforts including demand generation.  If you are only winning one in ten but still spending 10% on the pre-sales for each, then it is easy to figure out why you aren’t making any money.
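That arithmetic generalizes: each win also has to pay for the losing bids, so pre-sales cost as a fraction of won revenue is the per-bid spend divided by the win rate. A quick sketch (the function name is mine):

```python
def presales_cost_fraction(bid_spend_pct: float, win_rate: float) -> float:
    """Fraction of won revenue consumed by pre-sales effort.

    If every proposal costs bid_spend_pct of its expected award value,
    and only win_rate of proposals win, each win must also cover the
    cost of the losing bids: cost per win = bid_spend_pct / win_rate.
    """
    return bid_spend_pct / win_rate


# Winning 1 in 3 at 10% per bid -> 30% of revenue goes to pre-sales.
# Winning 1 in 10 at 10% per bid -> 100%: there is no profit left at all.
```

Run the numbers before you set your per-bid budget, not after.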

One more point: treat your proposals as a marketing document that can be reused many times.  The format of each submission may be significantly different, and some will require deeper detail in some areas.  After you have submitted six or so proposals, 85% of the actual content should be the same. After all, that’s why “copy” and “paste” were invented.  If you are writing significant new material for every proposal, maybe you haven’t found your niche market or the right potential clients.

The last word:
This is just one of many issues you need to confront as you enter the realm of consulting.  Peter Osborne and others have set up a Consultants Launch Pad.  The goal is to help people decide whether consulting is right for them, and then show them how to quickly set up their businesses and become successful.  It includes experienced consultants who offer advice.  Check it out.

Comments solicited.

Keep your sense of humor.


Read Full Post »

Don’t mess up the installation and upgrade processes for your software products.  They have the opportunity to negatively impact customer satisfaction, revenue, and profit.  Like dusting, nobody notices when you do it right, but it is obvious when you don’t.  These two aspects are actually different sides of the same problem: getting your software product into production for a customer and allowing you to get paid.

Don’t miss the opportunity to make a great first impression.  That first time your new customer really sees your product is when he tries to install it.  We have all had the experience of installing something that was really difficult, with lots of manual steps, poor installation documentation, and lots of “I wonder what I should do next” stop points.  Or you call the help desk and get someone as confused as you are.  Even if the help desk people are competent and help you on the first call, the product now is branded in your mind as complicated and hard to use.  Bad first impressions are hard to overcome.

When a new version for your product comes out, how painless is it to get it installed?  How much disruption to your customer’s business does it cause, and how much effort is required to get it right?  If your customer perceives that it is harder to install your upgrade than to move to your competitor, what do you think he will do?  It is much better to never make the customer even ask the question.

One place I worked had a fairly common CRM (customer-relationship management) package from a major vendor.  In order to get some problems fixed and take advantage of some new features, we needed to upgrade from version n to version n+1.  The vendor said it would take two days, so it was scheduled over a weekend.   It took more than seven days and involved a lot of work from both the vendor and us.  That was a week where we couldn’t take orders, send invoices, or even properly handle support calls without a lot of manual effort.  That vendor doesn’t supply the CRM system there anymore.

Over the years I have been thrown into situations with existing product sets that had serious customer satisfaction problems.  It is nice in one way: it wasn’t my fault things were the way they were, and since I was the new kid on the block I could always ask the stupid questions like “why is that so hard?”  However, the issues were usually ingrained as much in the culture as the technology, and culture can be interesting to fix.

In one case I started by looking at the support call log for the past six months.  Almost 80% of the calls were about installation or upgrades.  Some of the calls were open for weeks until they were resolved.  The support team was top notch, but the software was just very complicated to install, and it had to be manually installed at lots of places.  These calls had several suboptimal results: support calls cost real dollars to respond to; the customer is not going to pay his invoice until the product actually works, which delays revenue; and the customer isn’t real happy with your product.  The latter persists and gives your competition an easy way to throw FUD (fear, uncertainty, and doubt) the next time the customer needs more product.  I took a small set of the best engineers and made it their problem.  They started by sitting at the support desk taking calls, calling customers who had reported serious installation or upgrade issues, and listening to what the customer wanted.  Now, as is often the case, what the customer really wanted was to get the installation CD close to the system and have everything happen automatically.

The team came up with a totally different way of looking at the problem.  A couple of months later they had an installation process that we could use across the product line that asked all of the necessary questions up front, then automatically did everything everywhere.  If it was an upgrade installation, then it didn’t even ask the questions, it figured the answers out from the existing environment.  The net result was that six months later our support calls for installations and upgrades were only about 10% of our calls, and were almost always resolved in the first call.  The CEO and I started getting emails from customers who wanted to tell us how great our software was.  Even better, we entirely replaced our biggest competitor at a large account simply because we could get a new project up and running in one day instead of the three days the incumbent vendor claimed was required.

In another case we had a product that was fairly critical to a customer’s operation.  The primary engineer on the product was much more interested in adding neat new features than in making the upgrade painless.  I sent the engineer to visit the customer for a few days and see how the customer was actually using the product, and to directly see how lost time for an upgrade impacted the customer.  When he got back, the engineer disappeared into his office for a couple of weeks, then demoed his new upgrade mechanism to us.  I sent him back to the customer’s site to show the customer and get his feedback.  The customer was ecstatic – his upgrade downtime went from many hours to less than a minute, and the manual input required went to virtually nothing, eliminating the human error that had sometimes forced the customer to re-install the upgrade several times to get it right.

The best part of both of these examples is that there was very little change to the product – it was almost all in the installation.

In most software development organizations, the developers view installation as unimportant and uninteresting.  No one wants to work on it.  It is often the last thing done, and is done quickly without much thought and without much testing.  The first thing that needs to happen is to change that culture.  Engineers usually take great pride in their work.  I have found that if you actually talk to them about the importance of installation and upgrades to their customers and the company’s revenue, you can usually get them to change their view.

When should you start worrying about these issues?  You can maybe get by with a labor-intensive installation process for a couple of early adopters, and maybe you can get that first customer to the first real production release by going in yourself and doing all of the work involved.  But after that, the product must be easy to install and upgrades must be trivial to do and have the absolute minimum impact on the customer.

While every situation is different, there are a few guidelines to consider:

  • Only ask a question once.
  • Ask all of the questions up front, especially if the installation / upgrade process will take more than a few seconds.
  • Verify, to the extent possible, all installation parameters at the time they are provided to the process.
  • If something goes wrong during the installation process, make sure you give the customer enough information to determine the exact nature of the problem and to give your help desk folk a good chance of solving the problem on that important first call.
  • Allow the user or administrator to make a reasonable change to any configuration item at any time without a re-install, and “remember” the change.
  • For upgrades, pick up the current installation parameters.  In general, there should be no need to ask questions during an upgrade.
  • When replacing a competitor’s product, give the customer the opportunity to pick up configuration information from your competitor’s product.  Even better, have your installation look for your competitor’s product and give the customer a single click option to configure your product the same way.
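The core of the guidelines above can be sketched in a few lines of Python.  This is a minimal illustration, not any real installer: the file name, the questions, and the validation rules are all hypothetical.  The key ideas it shows are asking every question up front, validating each answer the moment it is given, and “remembering” the answers so an upgrade asks nothing at all.

```python
import json
import os

CONFIG_FILE = "install_config.json"  # hypothetical location for remembered answers

QUESTIONS = [
    # (key, prompt, validator) -- each question is asked at most once
    ("install_dir", "Installation directory", os.path.isabs),
    ("port", "Service port (1024-65535)",
     lambda v: v.isdigit() and 1024 <= int(v) <= 65535),
]

def gather_parameters(ask):
    """Ask every question up front, validating each answer as it is provided."""
    params = {}
    for key, prompt, valid in QUESTIONS:
        answer = ask(prompt)
        if not valid(answer):
            # Fail immediately with enough detail to fix it on the first try.
            raise ValueError(f"Invalid value for {prompt!r}: {answer!r}")
        params[key] = answer
    return params

def install(ask):
    if os.path.exists(CONFIG_FILE):
        # Upgrade: reuse the recorded answers -- no questions asked.
        with open(CONFIG_FILE) as f:
            params = json.load(f)
    else:
        params = gather_parameters(ask)
        with open(CONFIG_FILE, "w") as f:
            json.dump(params, f)  # "remember" the answers for later upgrades
    # ... the unattended installation would proceed here using params ...
    return params
```

The `ask` callable is injected so the same logic works from a console prompt, a GUI wizard, or an unattended answer file; the second run finds the saved configuration and never interrupts the customer.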

Talk to your customers about their issues with upgrades and installs.  Make sure your qualification tests include significant testing of the process, especially upgrades.

Make sure your customer doesn’t have to upgrade everything at the same time.  If you have different components that may be scattered in the customer’s environment, make sure that different versions can interoperate to some degree, and publish those rules.  If components at version n+2 can operate with components at version n and n+1, that’s great.  If compatibility is limited to versions n and n+1 only, then publish that.  Of course, some features may not be enabled until all components are at a level that supports those features.
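One simple way to publish and enforce such rules is to make them executable.  The sketch below is an assumption about how you might encode them, not any particular product’s scheme: a compatibility window of one version apart, and features that only light up once every deployed component supports them.

```python
COMPAT_WINDOW = 1  # versions n and n+1 interoperate; n and n+2 do not

def can_interoperate(version_a, version_b):
    """True if two component versions are allowed to talk to each other."""
    return abs(version_a - version_b) <= COMPAT_WINDOW

def feature_enabled(component_versions, feature_min_version):
    """A feature is enabled only when every component is at a supporting level."""
    return min(component_versions) >= feature_min_version
```

Shipping the rule as code (and testing it in your qualification suite) keeps the published compatibility matrix and the actual behavior from drifting apart.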

The most critical place to have an effortless install is for your demo or trial package.  If that doesn’t install obviously and quickly with no problems, you don’t even have a chance for a sale.

The last word:
Where do the most experienced aircraft engineers work?  On the landing gear.  It’s hard to get right and you have to get it right. Bad things happen when they fail, and there is no opportunity to try it again.  Who should design and develop your installation and upgrade processes?  Your most experienced designers and developers.  If you don’t have top notch installation experts on staff, go rent some.  Once you get it right, it is fairly easy to keep it that way.

Comments solicited.

Keep your sense of humor.


Read Full Post »