When stability is everything

When you write code, make it future-proof.

In many industries, the stability and consistency of a system are critical to business success. Yet businesses also demand continual change, and those changes must not compromise stability.

As a software developer, you can make strategic choices that optimise a solution for stability. Versioning and rollback strategies, unit tests and simpler code all build confidence in the solidity of a solution.

Unit tests and automation tests

Unit tests exercise specific scenarios to confirm that the code behaves as intended. If a unit test breaks, either the change should be reviewed or the test should be updated to reflect the new business rule.

When developers add unit and integration tests, they can tell immediately whether a change to the solution breaks existing business rules.

If resources allow, an automation tester can add tests against the user interface. These act as a fail-safe, since unit tests often do not cover user interactions.

Combining unit tests with automation tests gives both developers and business owners greater confidence that deploying code will have the desired outcome.
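As a minimal sketch of the idea, here is a hypothetical business rule pinned down by unit tests. The rule itself (loyalty members get 10% off orders over 100) is invented for illustration:

```python
import unittest

def apply_discount(total, is_loyalty_member):
    """Hypothetical business rule: loyalty members get 10% off orders over 100."""
    if is_loyalty_member and total > 100:
        return round(total * 0.9, 2)
    return total

class DiscountRuleTests(unittest.TestCase):
    """Each test pins down one scenario of the business rule."""

    def test_member_over_threshold_gets_discount(self):
        self.assertEqual(apply_discount(200, True), 180.0)

    def test_member_at_threshold_pays_full_price(self):
        self.assertEqual(apply_discount(100, True), 100)

    def test_non_member_pays_full_price(self):
        self.assertEqual(apply_discount(200, False), 200)
```

Run with `python -m unittest`: if a later change breaks one of these scenarios, the failing test names the exact rule that was violated, instead of the bug surfacing in production.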

Deployments and DevOps

A $460 million DevOps disaster

Knight Capital Group, a financial trading firm, didn’t have a proper deployment strategy in place. In 2012, they deployed new code to their servers; with eight servers to deploy to, a technician missed one, leaving the old code active on it. Here’s a quote from the US Securities and Exchange Commission filing:

“During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.”
SEC Filing | Release No. 70694 | October 16, 2013

Knight Capital Group realized a $460 million loss in 45 minutes.


It makes sense, then, to automate the deployment process. This is exactly what deployment tools such as Octopus Deploy and continuous integration (CI) servers such as TeamCity, GoCD and Jenkins are for: they build the code automatically, run the unit tests and package the result for release.

It is also advisable to have one-click deploys, ideally with a blue-green strategy, where traffic is only switched to the new environment once every step has completed successfully.
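The cutover step of such a strategy can be sketched as a simple consistency check: traffic only moves to the new ("green") environment if every server reports the same build. The server names and version strings below are illustrative, not from any real system:

```python
# Hypothetical pre-cutover check for a blue-green deployment.

def stale_servers(reported_versions, expected_version):
    """Return the servers that are NOT running the expected build."""
    return sorted(s for s, v in reported_versions.items() if v != expected_version)

def cut_over(reported_versions, expected_version):
    """Switch traffic only when the whole fleet is consistent."""
    stale = stale_servers(reported_versions, expected_version)
    if stale:
        raise RuntimeError(f"Aborting cutover; stale servers: {stale}")
    return "traffic switched to green"

# A Knight-style failure: one of eight servers never received the new code.
servers = {f"server-{i}": "2.0.0" for i in range(1, 9)}
servers["server-8"] = "1.4.7"  # the forgotten server
```

With a check like this in the pipeline, the forgotten eighth server aborts the release instead of trading against it.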

Versioning and deployment

Many managers who have to deal with unstable code believe that not upgrading or deploying new code is the answer to keeping the status quo. This is, however, not viable in today’s fast-paced digital businesses.

When deploying changes to a codebase that is unstable, large and/or complex, you need to know exactly what is going live. This requires a proper deployment process, including source control logs of every change.

Having this information available not only lets developers find bugs faster (they know exactly what changed) but also gives the product owner peace of mind about what is currently in the production system.
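One lightweight way to surface that information is a release manifest assembled from source-control history, so the developer and the product owner both see the same list of what is going live. The commit records below are hard-coded stand-ins for real `git log` output:

```python
# Illustrative release manifest built from (sha, author, message) records.

def release_manifest(version, commits):
    """Render commit records as a readable what-is-going-live summary."""
    lines = [f"Release {version}", "Changes going live:"]
    for sha, author, message in commits:
        lines.append(f"  {sha[:7]}  {author:<8}{message}")
    return "\n".join(lines)

commits = [
    ("a1b2c3d4e5", "alice", "Fix rounding in invoice totals"),
    ("f6e5d4c3b2", "bob", "Add audit trail to payment service"),
]
manifest = release_manifest("3.2.0", commits)
print(manifest)
```

In a real pipeline the commit list would come from the CI server or `git log` between the previous and current release tags.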


If you write less code, then there is less code that can break

Making the codebase smaller by deleting unused, deprecated and/or duplicated code lowers the risk of issues sneaking in. Dead code also misleads: the IDE’s IntelliSense will happily suggest methods that are no longer used anywhere, inviting developers to build on top of them.

When writing code, follow best practices for naming conventions and code structure. The result is not only easier to debug; changes can be made with clarity rather than guesswork.

Developers can also simplify code through constructive refactoring, for example breaking large methods into classes with smaller, focused methods.
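A minimal sketch of that refactoring, with invented order fields: one long method becomes a short orchestrator over small, individually testable steps.

```python
# After the refactor: each step of the former "large method" stands alone.

def validate(order):
    """Step 1: reject obviously invalid input."""
    if order["quantity"] <= 0:
        raise ValueError("quantity must be positive")

def subtotal(order):
    """Step 2: pure pricing logic, trivial to unit test."""
    return order["quantity"] * order["unit_price"]

def format_receipt(order, total):
    """Step 3: presentation only."""
    return f"{order['item']}: {order['quantity']} x {order['unit_price']} = {total}"

def process_order(order):
    """The orchestrator: reads as a summary of the business process."""
    validate(order)
    return format_receipt(order, subtotal(order))
```

Each small function can now be covered by its own unit test, and a change to pricing no longer risks breaking validation or formatting.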

Should I upgrade/update often?

I once worked on a solution that was running a version of SQL Server that was 20 years old. We were explicitly told that the solution was a critical business system.

Not only did it run on a PC whose operating system no longer received support updates, but the database itself had never been upgraded either.

Concerning upgrading a system, the options and their impact are as follows:

  • Never upgrade
    • Certain devices and browsers will eventually stop displaying the app properly.
    • The risk of hacking and of errors in the software, operating system and infrastructure is greatly increased.
    • The code might be stable for the time being, but will require a total rewrite once the existing system can no longer work as expected.
  • Upgrade at major versions only
    • Code without unit test coverage might be unstable after the next release.
    • The lifetime of the code is extended.
    • More work will be required per upgrade due to deprecated functions.
  • Upgrade at a regular interval (every six months or annually)
    • Code without unit test coverage might be unstable after the next release, but the impact will be smaller than with major-version jumps.
    • The lifetime of the code is extended.
    • The changes are broken down into smaller, bite-size chunks.


Software stability is vital to some businesses. For them, it makes sense to have the proper infrastructure in place to cut out human error.

Proper unit, integration and automation tests can prevent a change to a complex business rule from turning into a crisis.

Though one might consider never upgrading a system, this merely delays the inevitable: rewriting and/or refactoring the code.

When stability is important, make sure proper due diligence is done and processes are in place to stop rogue code from being promoted to production.

Simply be effective.    
