In a previous blog post, From a Hardware-First to a Software-First Business, I described how you can benefit from starting software architecture early when developing a modular product platform, instead of pushing software late, after hardware and electronics development. I also briefly described that there are many good reasons for companies to keep large chunks of their old software system when improving their software architecture. This is what I will expand upon in this blog post.
It is very tempting to start with something entirely fresh when you are investing in the next-generation software architecture. There are great promises of being able to use all the new and exciting technologies developed in recent years, instead of keeping old “dusty” code built with technologies from the past.
As a manager, you must forgive your developers for believing in this promise, since it is human nature to strive for something new and promising. Furthermore, since code is easier to write than to read, developers tend to believe that code written by someone else is bad.
This is, however, a classic software architecture mistake that should be avoided. History has many bad examples of software projects that crashed due to wrong ideas about old code and designs.
Tearing legacy out by the roots is a bad idea in software. This blog post will dig deeper into why, quantify the costs and risks hidden within a rewrite project, and describe why it often makes much more sense to refactor existing code.
Refactor, rewrite, do nothing?
The technological advancement in the field of software development has been staggering for several decades, with few signs of slowing down. On the contrary, it is accelerating: application after application that was previously hosted natively on a specific operating system and device, such as a Windows PC, a macOS MacBook, or an Xbox, is now moving to a cloud-hosted environment, available to a greater public anytime and anywhere. A typical car of 2022 has hundreds of millions of lines of code, compared to maybe a few hundred thousand at the beginning of the millennium. Expectations on future cars are like those for a smartphone or tablet: they should have great apps that automatically improve over time, integrate with home automation systems, and host a powerful AI-driven assistant such as Google Assistant, Alexa, or Siri.
When developing a product platform for a complex system such as a car, it is natural to also include the electronics and software since many features are powered by software and high-speed internet connectivity.
My favorite feature is the cloud-powered detection of slippery roads that some cars of today have integrated. Sensor data from your car is compared with cloud data from other cars, and a warning is signaled back to your dashboard about the slippery road coming up ahead. That kind of feature would simply have been impossible previously, when the only sensor was a thermometer and the reading was presented to the driver without further analysis. Today the information is presented in the car's information system so seamlessly that most people may not even think about the clever analysis and data sharing behind it.
For electronics and mechanics, some existing modules may be carried over from previous platforms, but often many of the modules for a new product platform are brand new, driven by factors such as new physical designs and end-of-life electronic components.
The same would apply to software; however, I would argue that the share of modules that can be carried over from the previous generation is typically higher. Why? Software does not have the same drivers of change as hardware: physical design constraints, end-of-life of chips, and so on. The interface towards a software module is also easier to keep, since it is purely logical. In theory, you only need to add functions to software; whatever worked before for existing functionality should not cease to work, and given proper modularization, the modules should be isolated from unnecessary change.
To create a deeper understanding of the value of code reuse we need to dig further into some of the issues that you will be facing when replacing an existing software system with a new one.
If your software has been around for a while, it’s a safe bet that the company has invested a lot in it over the years. That investment has not only been in new features, but also in tweaking and optimizing the behavior of the software system. Corner cases that seldom happen may have been handled, along with usage that wasn’t considered in the original design. There is a high risk that smaller tweaks and fixes weren’t fed back into the original specification of the software design but are lurking, in the best case, somewhere in the Change Management (CM) system for the software. The code matures over time, but the specification stays as it was.
This type of real-world matured code is the equivalent of “battle-proven” code. It has seen a life outside the highly theoretical drawing board and is functional to a degree where most current users are quite happy with it. The knowledge of how the product behaves is hence, to an important degree, buried in the code and not (only) in the design documents and requirements.
To add to this challenge, people move over time. The developers who originally wrote, tweaked, and fixed the code have typically moved on to other positions, inside or outside of your company, by the time you are thinking about a rewrite. There is an obvious risk that you end up redoing the same mistakes and having to repair them once again. Depending on the size of the software system, this can be quite a substantial effort and, in the worst case, affect behavior of the system that the customers have grown to like and expect. It is generally very annoying for customers if the same odd behavior that you stamped out of the product in the past resurfaces in the new product.
If you are rewriting, the only viable option is to first spend time ensuring that the documentation is up to date with the actual code of today. You will probably have to put your best software architects on a thorough review project. Systems with a high degree of automated software testing are typically under stricter control, since the expected functionality must be explicitly documented in the form of test code. If this is not the case, an interesting possibility is to start with test automation before the software rewrite.
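One low-risk way to start such test automation is with characterization tests (sometimes called golden-master tests): tests that pin down what the code actually does today, rather than what the specification says it should do. Here is a minimal sketch in Python, where `legacy_discount` is a made-up stand-in for your real legacy code, called exactly as it is deployed:

```python
# Hypothetical legacy function, standing in for real production code.
def legacy_discount(price, quantity):
    """Order price; 10% bulk discount from 10 units (current behavior)."""
    if quantity >= 10:
        return round(price * quantity * 0.9, 2)
    return round(price * quantity, 2)

# Characterization tests: they assert what the code DOES, not what a spec says.
def test_bulk_discount_kicks_in_at_ten_units():
    assert legacy_discount(5.0, 10) == 45.0

def test_nine_units_cost_the_same_as_ten():
    # A real-world corner case worth preserving (or consciously changing):
    assert legacy_discount(5.0, 9) == legacy_discount(5.0, 10)

test_bulk_discount_kicks_in_at_ten_units()
test_nine_units_cost_the_same_as_ten()
print("all characterization tests passed")
```

The second test documents a real, perhaps surprising, corner case: nine units cost the same as ten discounted units. A rewrite or refactoring that silently changes such matured behavior will now fail the suite instead of surprising customers.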
Professor Noriaki Kano formulated a model for strategic product planning and customer satisfaction in the 1980s, called the Kano Model. Today this model is well known; it explains how exciting new features of today’s products become qualifying requirements of tomorrow’s. Once customers appreciate a feature, they will look for it when the product is replaced, even if it was not a feature they even thought of when buying the previous product. The Kano Model is applicable to strategic product planning for software products, just as for hardware products and any combination thereof.
The Kano Model explains how some requirements are qualifying “must-be” qualities, while others are delighting “attractive” qualities.
Staying with passenger cars, some features that were once exciting innovations but are now seen as fundamental qualifiers are airbags, ABS brakes, electronic stability control, backup cameras, and smartphone integration. What are the innovative features of cutting-edge cars of 2022 that will be qualifiers ahead? Perhaps autonomous driving? Automotive companies are racing to create the systems that will be cutting-edge tomorrow and qualifiers in the long run.
But how does this apply to software architecture development?
When you plan for the next generation of software, you need to aim above the level of functionality of your current system. Features that are not used or valued can perhaps be phased out, but you must listen very carefully to the voice of the customer and know your market: which functionality will be expected as a qualifier ahead, and which features can be marketed as new and exciting for the customer?
Consider a rewrite to set up a new and improved software architecture. Implementing all these qualifying and delighting features from scratch will take time. Competitors will not stand still and wait for you to complete the rewrite project. They will continue to innovate from their software platforms, raising the bar you must reach to satisfy your customers higher and higher. A rewrite that is not finished in time runs the risk of being obsolete before it is even released.
Let’s look at a fictitious example:
Imagine a company, iVac, that has been in the business of selling robotic vacuum cleaners since 1996. Since then, they have invested in their technology and made some major advancements but now in 2022 they want to create a new product platform that will be the foundation of their complete product range, including the software.
Up to this point, they have invested thousands of man-hours in their software and have sold millions of robots that are operational across the world. The target for a new, improved software platform is at the very least to match the functionality of their latest software, and then to add features made possible by the new hardware.
Target for a new software platform at the start of the development.
The project, planned for 1.5 years, takes 3 years to complete due to resource constraints and changes to the requirements specification along the way (more on that below).
Target for a new software platform at the end of the development
At the point of release, 3 years later, the requirements on the software platform have changed quite significantly, as has the available technology. A lot happens in the software world over 3 years. A famous example of this challenge is the “vaporware” Duke Nukem Forever, which took 15 years to develop and received an underwhelming response once finally released.
The competition has not waited for iVac to complete its project and has made significant advancements that iVac needs to address. Recent cyber security attacks on home automation products have also driven new requirements for stronger protection against malware and network attacks. If iVac had started a rewrite in 2022, chances are they would not have been ready to deploy anything competitive even by 2025, delaying the innovative new hardware and possibly putting the entire company’s future at stake.
An example of a failed rewrite is Netscape Navigator 6.0. Netscape rewrote the code from scratch starting in 1997, since they thought the legacy code rendered pages too slowly and was a structural mess. Navigator 6 never reached maturity and stability, and Netscape was outcompeted by Microsoft’s Internet Explorer 6 (IE6), since the available and stable Navigator 4.0 saw no innovation during the rewrite. You can read more about this example and others in the excellent blog post Things You Should Never Do, Part I by Joel Spolsky, written in 2000 but still just as valid.
Since the reason to rewrite is typically to enable new technologies that are more efficient and modern, you must investigate just how much these technologies will increase productivity and how soon that increased productivity can cover the cost of the rewrite. Compare this investment with the estimated life of the new architecture.
A rule of thumb is that if you can’t be at least 2 times more productive with the new tools and technologies, it will not be worthwhile to invest in a rewrite.
A good example of greatly increased productivity was when the C and FORTRAN programming languages replaced assembly as the main languages for implementing code in embedded devices back in the 1990s. Those who switched to these more productive languages could see a productivity increase by a factor of at least 3 (source: Software Economics and Function Point Metrics - Thirty Years of IFPUG Progress), which would justify the cost of rewriting the applications in the more productive language. These kinds of technology leaps are rare these days, however, so careful productivity calculations are advised.
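Such a productivity calculation can be sketched in a few lines. The numbers below are purely hypothetical, and real estimates should also account for ramp-up time, parallel maintenance of the old system, and the risk factors discussed above; the point is only the shape of the break-even reasoning:

```python
# Rough break-even sketch for a rewrite decision (all numbers hypothetical).
rewrite_cost_hours = 20_000              # estimated effort to rewrite the system
current_output_hours_per_year = 10_000   # yearly development effort at today's pace
productivity_gain = 2.0                  # new stack assumed 2x as productive

# With the new stack, producing the same yearly output takes half the hours:
hours_saved_per_year = current_output_hours_per_year * (1 - 1 / productivity_gain)

break_even_years = rewrite_cost_hours / hours_saved_per_year
print(f"Break-even after {break_even_years:.1f} years")  # -> 4.0 years here
```

If the break-even point lands close to, or beyond, the expected life of the new architecture, the rewrite cannot pay for itself, no matter how attractive the new technology looks.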
Given all the arguments against rewrites, what is really the alternative that you should look closer at to improve your architecture and implement modern technology?
One of the answers is obvious: look at which parts of the code need an overhaul and which parts work well and can be reused in the future design. Code doesn’t age, but the world around it changes, so typical problems with old code are scaling, or support for new software platforms and interfaces that didn’t exist when the code was written.
Also, requirements on cyber security have increased significantly over the last decades and are growing in importance. There are, however, straightforward technologies here to help you, such as secure boot, code signing, secure credential handling (secure storage, strong encryption, etc.), and secure communication protocols, which can be employed even on very old code. These measures ensure that the device only runs code that was released by you.
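The core idea of code signing can be illustrated with a small sketch: verify an image against its signature before accepting it. Real secure boot uses asymmetric signatures (for example Ed25519 or RSA) with the public key anchored in hardware; a keyed hash (HMAC) with a hypothetical device secret stands in here so the example runs with the Python standard library alone:

```python
import hashlib
import hmac

# Hypothetical factory-provisioned secret; real deployments would use an
# asymmetric key pair so that devices never hold the signing key.
DEVICE_KEY = b"factory-provisioned-secret"

def sign_image(image: bytes) -> bytes:
    """Compute a signature for a firmware image (vendor side)."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Check the signature before booting the image (device side)."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)  # constant-time comparison

firmware = b"\x7fELF...legacy-module-binary"
sig = sign_image(firmware)
print(verify_image(firmware, sig))            # True: untampered image accepted
print(verify_image(firmware + b"\x00", sig))  # False: modified image rejected
```

Note that the legacy code inside the image does not need to change at all; the signing and verification wrap around it, which is exactly why these protections can be retrofitted onto very old code.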
If a software module is working as intended, there is no need to rewrite it just for the sake of structure. Instead, look at the interfaces and the features it implements and apply a modular strategy to it. Then write automated tests for the module so that you dare to do internal refactoring (i.e., refurbishment) of the module, solving potential issues with the code such as suboptimal memory usage, poor scalability, multi-threading issues, security, etc.
If you want to utilize new technologies to increase productivity, define what these new modules are (function, interfaces, and strategy) and what their purpose is, so that they can be added to the modular software architecture. Technology advancements in graphics (HMI) and web/communication protocols are some key things to consider, but there can clearly be others depending on the product.
Recommended read: What is a Strategic Software Module?
Another recommended measure when improving your software architecture is to define a Minimum Viable Product (MVP). An MVP consists of the minimum set of features that would make it possible to sell the product. The idea of defining an MVP comes from agile software development practices but is applicable far beyond software.
Let’s make a simple visual example of what an MVP is:
Figure 2. Create an MVP donut
(images licensed under https://creativecommons.org/licenses/by-nc/3.0/)
If the task is to create a donut, the pictures in the middle and on the right satisfy the requirement, but the picture on the left does not, since the dough hasn’t been baked yet. The donut on the right is more complicated to create than the basic version in the middle and hence takes more time to bring to market. Therefore, the picture in the middle is the MVP of a donut. Once you have put the MVP on the market and start earning money from it, you can start looking at the fancier donut to the right.
The benefit of defining an MVP is that you get the minimum requirements your product must meet to be put on the market and start generating income. This prevents scope creep and focuses resources on what is most important: introducing the improved architecture to the market without fully completing every aspect of it.
Software is not a product that you can ship to customers after project completion and then never care about again. That may have been the case in the 80s and 90s, but in the 2020s those days are gone forever. Today, software is more like a living organism that needs to be continuously looked after, improved, refactored, and updated. Therefore, it is important to design the software so that it has room to grow, with solid update mechanisms built in from day one.
The capability to update the software over the air should be considered if it is not already implemented. It is a very convenient way of delivering new features to your product even after shipping, and its business value has been demonstrated with great success by, for example, Tesla. It is also one of the most effective cyber security defenses for embedded devices, which otherwise risk being taken over to serve in botnets.
So now, back to the topic of this blog post. Is it a good idea and well-spent money to do a complete rewrite of the software while developing a new product platform?
It depends on many factors, but usually, the answer is simple: “No”.
It is a golden rule of software architecture to avoid a complete rewrite, since it can be a very painful venture to be part of and the risk of failure is high. Some companies have the financial muscle and capabilities to shake off such a failure, like Apple, Google, and Microsoft, but those companies also don’t rely on a single product to stay profitable. If you don’t have that kind of muscle, or a lot of other products that can keep you afloat, it makes sense to look at refactoring options instead, since they are the shortest way to earn money from improving your design while limiting the risk.
Striving for a governed modular software architecture, where selected modules can be developed over time, is worthwhile and can help you stay profitable in the future. Modular Function Deployment is a method that sets a path towards this.
Before you start tearing your current software system up by the roots, make sure to look at what can be kept and which parts need refurbishment. That will be time and effort well spent! For more suggestions for improving your software, check out this post on the Best Practices for Software Architecture in Hardware Companies.
Strategic Product Architectures are a passion of mine, and I’m happy to continue the conversation with you. Contact me directly via email or on LinkedIn if you’d like to discuss the topics covered, or if you’d like a sounding board in general around Modularity and Software Architectures.
Senior Specialist Software Architectures
roger.kullang@modularmanagement.com
+46 70 279 85 92
LinkedIn