Going Live Too Early Can Be Worse Than Going Late

A significant part of my engagements involves post-mortems on large, failed technology projects. And out of professional interest, I study public project failures and write case studies about them.

Based on my experience and studies, most of these disasters could have been prevented (or at least anticipated) based on the project’s available information.

While project teams are rarely surprised by the outcomes, sponsoring executives are often unaware of the impending tragedy until it unfolds — and sometimes at the expense of their careers.

One executive decision in particular can move your project from challenged to a full-blown disaster: going live too early with a business-critical system.

A well-known characteristic of large projects is that once “go-live” dates are planned and communicated, they are hard to change. There are multiple reasons for this. For example:

> Highly visible or public commitments may be difficult or embarrassing to come back from.

> Breaking contractual commitments may have significant financial consequences.

> Individual and company reputations may be damaged by not meeting a promised date.

> Significant costs may be incurred to keep the current solution running beyond the cutover date. These costs can include renewing expensive leases, licenses, or maintenance for equipment, software, or real estate.

> People currently working on the project may be needed elsewhere, creating a resource problem that would impact operations and other projects if the current efforts were extended.

Most of you have experienced such timeline pressures; they are real, and I do not intend to underplay them. But, as real and daunting as these pressures can be, they have to be balanced with the consequences that a premature go-live can have.

You must consider both scenarios in your decision-making.

Estimating implementation timelines is an imprecise art, not a science. It is subject to error as well as risk and uncertainty. As your information systems become increasingly interconnected and sophisticated, coordinating changes takes time, patience, and hard work.

Systems often have substantial elements of organizational change that must be addressed before a go-live. If your new or changed system requires a meaningful change in how people do their work, significant time may be required to understand the implications of the change, modify processes, and enable users through training and support. Changes in the processes may create downstream effects in other parts of your organization that must also be understood and dealt with.

Replacing all, or part of, an existing system can also be complicated by differences in how data is captured and stored between the old and new systems. The efforts and impact of stopping an existing system, converting and loading data, and restarting the new system are easy — and devastating — to underestimate.

Unfortunately, in large projects all of these things are often ignored as soon as the first milestones are missed. Suddenly it is all about making the deadline, not about doing the necessary work and mitigating these risks.

Below are a few prominent examples of what can happen if you go live too early:

> Case Study 13: Vodafone’s £59 Million Customer Relationship Disaster

> Case Study 9: The Payroll System That Cost Queensland Health AU$1.25 Billion

> Case Study 6: How Revlon Got Sued by Its Own Shareholders Because of a Failed SAP Implementation

> Case Study 3: How a Screwed-Up SAP Implementation Almost Brought Down National Grid

> Case Study 2: The Epic Meltdown of TSB Bank

So before you find yourself pushing relentlessly for the go-live date, I encourage you to consider the following eight questions about your team’s readiness for go-live.

1) Is your system ready?

Has it been thoroughly tested? Is there an honest assessment of the quantity and priority of its known defects? Have business users weighed in with an informed opinion about the acceptability or the feasibility of workarounds for the known issues?

2) Are your upstream and downstream systems ready?

Have interfaces been tested fully? Are instruments in place to monitor traffic and identify problems? What would be the business impact to your system and your surrounding systems if there is a significant disruption of data flow or a significant error in the data? What are the consequences of data delays? Can the teams operating your upstream and downstream systems support your cutover date?

3) Are your users ready?

Have the implications of changes been thoroughly analyzed and communicated? Are users ready for the changes to their processes? Does the organization understand the secondary effects of these changes on downstream processes? Are users sufficiently trained on the new system? Will there be enough staff available during cutover to deal with the learning curve of the new system and processes and work through unforeseen issues?

4) Is your technology infrastructure ready?

Are you confident that there is sufficient processing, network, and storage capacity to support the new system? Are your help desk and operations people trained and ready?

5) Is there a go-live playbook or schedule?

Did you provide details of the steps required to stop the current systems, migrate data, convert interfaces, and begin processing with the new system? Are roles and responsibilities during cutover clear? Are the timings of the cutover schedule reasonable? Has there been a successful dry run?

6) How and when will you know that the cutover was successful?

Success here means more than the technology working; it also covers the impacts on the processes and the people who interact with the system. Has monitoring been established to provide early warning of problems?

7) What is the impact of a failed implementation?

Have you considered the failure modes of the new system and their impact on your business and that of your partners? Are there feasible workaround processes in place should cutover disruption be extended?

8) Is there a feasible fallback plan?

Can the cutover be aborted if significant issues are discovered? How long can the new system be in place before the cost and complexity of backing it out become prohibitive? What is the “point of no return” on the implementation? Is the fallback plan as detailed as the go-live plan? Has it been tested?
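The eight questions above amount to a go/no-go checklist, and some teams find it useful to make that checklist explicit and binary. The sketch below is purely illustrative: the check names paraphrase the questions in this article, and the all-or-nothing pass rule is an assumption, not a prescribed method.

```python
# Illustrative sketch only: a minimal go/no-go readiness checklist.
# The check names mirror the eight questions above; the structure and
# the "every check must pass" rule are assumptions for illustration.

READINESS_CHECKS = [
    "System tested; known defects assessed and accepted by the business",
    "Upstream/downstream interfaces tested; data-flow monitoring in place",
    "Users trained; process changes analyzed and communicated",
    "Infrastructure capacity confirmed; help desk and operations ready",
    "Go-live playbook exists and a dry run has succeeded",
    "Cutover success criteria and monitoring defined",
    "Impact of a failed implementation assessed; workarounds feasible",
    "Fallback plan as detailed as the go-live plan, and tested",
]

def go_no_go(results: dict) -> bool:
    """Return True only if every readiness check passes.

    A missing check counts as a failure: an unanswered question
    is not a 'yes'.
    """
    failed = [check for check in READINESS_CHECKS
              if not results.get(check, False)]
    for check in failed:
        print(f"NOT READY: {check}")
    return not failed

# Usage: mark each check honestly, then let the answer fall out.
results = {check: True for check in READINESS_CHECKS}
results["Fallback plan as detailed as the go-live plan, and tested"] = False
print("GO" if go_no_go(results) else "NO-GO")  # prints "NO-GO"
```

The point of the binary rule is the article's argument in miniature: a committed date does not turn a "not ready" into a "ready".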

Before succumbing to the inertia and inevitability of a committed date, make an honest assessment of whether the system, and the organization in which it operates, are ready for the transition.

So instead of focusing on the list of reasons why a system “must” go live on the selected date, have an honest conversation with executive stakeholders about the true status of the initiative and the potential consequences of a failed implementation.

In a nutshell: Going live on time should not be the default. Going live when the system and organization are demonstrably ready should be the goal.

Originally published at https://www.henricodolfing.com.

I help C-level executives in the financial service industry with interim management and recovering troubled technology projects.
