Optimizing Test and Verification Costs with Modularity


By Roger Kulläng & Thomas Enocsson

 

In the blog What is the Cost of Developing, Maintaining, and Deploying a Software Architecture? we started to unravel the main cost distribution areas in software development, and in Cost-Efficiency Unleashed: The Power of Modularity in Software Development we covered how to reduce the cost of writing code, i.e. the software development phase. Another large portion of the costs lies in the test and verification (T&V) of the software, and in this blog we will dive deeper into how to reduce T&V costs by utilizing clever modular thinking.

We will cover how you, through modularity, can reduce costs by:

  • Finding errors early with module interface tests
  • Easier type approvals and certification tests
  • Investing in a modern and efficient test infrastructure
  • Releasing code in a modern DevOps fashion

Let’s start with the most obvious way of reducing test and verification costs: finding errors early in the development process.

Finding errors early with module interface tests

This topic was covered to a large extent in a previous blog post called How can Modularity Improve Software Testing?. It is, however, worthwhile to reiterate some of its critical parts as they relate to costs.

The cost of fixing bugs in software development varies significantly depending on the stage at which they are discovered and addressed. Let’s break down how costs can escalate at different stages of software development:

  • Requirements phase: The cost of fixing bugs here is at its lowest. Bugs are typically easier and cheaper to fix because they may only require changes to documentation or specifications. We can refer to this cost as 1x.
  • Design phase: The costs increase to about 3–5x. Bugs found here may require changes to the software’s architecture or design patterns, but no code has been written yet.
  • Coding phase: The costs grow to around 10x. Developers may need to spend time debugging and retesting their code, which can reduce velocity and delay other tasks, as discussed in our previous blog post.
  • Testing phase: The costs increase to 15–20x. Bugs might not only require code changes but also re-testing of the entire module or even the whole system to ensure nothing else was affected. Fixing a bug in testing hence costs roughly as much as all previous phases combined, and testing is also your last chance to catch bugs before you ship code to your customers.
  • Post-release: The cost of fixing a bug here skyrockets and can be anywhere from 30–100x the initial cost, or more. This is due to the potential for extended system outages, rollback procedures, delays to new features, lost revenue, and a negative customer experience that could damage your business reputation.


Figure 1. Relative cost of fixing bugs in different phases of development

The Systems Sciences Institute at IBM[1] reported that it costs 6x more to fix a bug found during implementation than one identified during design. Furthermore, the cost to fix bugs found during the testing phase could be 15x more than the cost of fixing those found during design.

Therefore, investing in the testing phase is a lot less costly than letting bugs slip through to the customer, even though it can require significant spending on infrastructure and personnel. One could argue that you should invest even more in the requirements-gathering phase, since improvements there would bring more bang for the buck. However, experience shows that even if you spend eons of time and money in this phase, you may still come up with the wrong requirements: developing a complete software system is highly time-consuming, and by the time the system is ready, the customers may have changed their minds and other requirements have become more important. This is the reason why Agile development practices were introduced.

It is common to implement unit tests at the component level of the code during development, and to apply system-level and even sub-system-level tests to find bugs during the test phase.

What we are proposing is to also include automated module tests in the build pipeline (a sketch follows the list below), which ensures that:

  • Modules are interchangeable and follow the module strategy.
  • Only the module interfaces are exposed to the outside world, hardening the system.
  • Innovation and new features can be introduced over time without the need to re-test the entire module system.
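
As a minimal sketch of what such an automated module interface test could look like, consider the pytest example below. The PaymentModule interface and the two module variants are hypothetical, invented purely for illustration; the point is that every module variant has to pass the same contract test before the build pipeline accepts it.

```python
# Hypothetical example: a contract test that every implementation of a
# "PaymentModule" interface must pass before the build pipeline accepts it.
from typing import Protocol

import pytest


class PaymentModule(Protocol):
    """The only surface other modules are allowed to depend on."""

    def authorize(self, amount_cents: int) -> bool: ...
    def capture(self, amount_cents: int) -> bool: ...


class LegacyPaymentModule:
    """Existing module variant."""

    def authorize(self, amount_cents: int) -> bool:
        return amount_cents > 0

    def capture(self, amount_cents: int) -> bool:
        return amount_cents > 0


class NewPaymentModule:
    """New, interchangeable module variant."""

    def authorize(self, amount_cents: int) -> bool:
        return 0 < amount_cents <= 1_000_000

    def capture(self, amount_cents: int) -> bool:
        return 0 < amount_cents <= 1_000_000


# The same interface contract runs against every module variant,
# which is what makes the variants interchangeable.
@pytest.fixture(params=[LegacyPaymentModule, NewPaymentModule])
def module(request) -> PaymentModule:
    return request.param()


def test_rejects_non_positive_amounts(module: PaymentModule):
    assert module.authorize(0) is False


def test_authorize_then_capture(module: PaymentModule):
    assert module.authorize(500) is True
    assert module.capture(500) is True
```

Because the tests only exercise the declared interface, swapping one module implementation for another cannot silently break consumers without the pipeline flagging it.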

If a module interface test fails, this is a signal that you may run into issues when updating already deployed software. This is especially critical if you plan to support Over-the-Air (OTA) updates without having to replace all software in the product in a monolithic fashion. For large software systems, monolithic software updates can be complicated and are something that should be avoided; they also risk unnecessarily introducing bugs into well-tested parts of the software system.

Module interface tests can also serve another purpose, described in the next chapter.


Modular type approval & certification tests

Type approval and certification testing is a process applied by national authorities to certify that a model of a product meets all safety, environmental, and conformity-of-production requirements before authorizing it to be placed on the market.

The process varies depending on the product, but generally, it involves the following:

  • Examination of Technical Documentation: The product’s technical documentation is examined.
  • Definition of the Test Program: A test program is defined based on the product and its specifications.
  • Identification of the Testing Laboratory: A testing laboratory capable of conducting the necessary tests is identified.
  • Execution of Type Tests: The product undergoes type tests as per the defined test program.
  • Review of the Test Reports: The test reports are reviewed to ensure the product has met all the necessary requirements.

If all relevant requirements are met, the type approval certificate is issued.

Modularized software can significantly support type approval tests and certification tests in several ways:

  • Ease of Testing: Each module in a modularized software can be tested independently, making it easier to identify and correct issues. This can lead to a more efficient and effective testing process.
  • Adaptability: Modularized software can be easily adapted to meet the varying legal regulations and technical requirements of different markets. This is particularly beneficial for type approval tests, which often involve compliance with specific regional or national standards.
  • Documentation: With modularized software, it’s easier to prepare comprehensive documentation for each module, which is often a requirement for type approval and certification tests.
  • Updates and Maintenance: If a module needs to be updated or modified, it can be done without affecting the rest of the system. This makes maintaining compliance with evolving standards more manageable.
  • Reusability: Once a module has been approved or certified, it can be reused in other systems, reducing the need for repeated testing.

In summary, modularized software provides a solid foundation for implementing Approval & Certification tests, ensuring reliability and quality in complex and ever-evolving codebases. With that said, the success of using modularized software for type approval and certification tests also depends on factors like the quality of the software design, the testing procedures in place, and the expertise of the team involved.

Investing in a modern and efficient test infrastructure utilizing intelligent test schemes

As described in the previous chapters, it makes a lot of sense from an economic perspective to invest significantly in test infrastructure to catch errors before they reach the customer, and at the same time make it easier to comply with the different testing specifications for certification and type tests.

However, before investing in an advanced test infrastructure, there are some common limitations that you should be aware of:

  1. Limitations in test hardware, i.e. specific test rigs that are only available in limited supply.
  2. Limitations in testing personnel, usually associated with many manual tests.
  3. Too large a software system to test, i.e. the time required for running all the tests is greater than the total time available for testing.

All the above limitations can become troublesome, since they force you to prioritize which tests to run, which contradicts the mission of testing everything continuously.

Utilizing the power of modularity here can certainly be worthwhile. Let’s see how:

1. Limitations in test hardware

This seems to be the most straightforward test limitation to address: invest in more test rigs to speed up the testing effort, right?

This could still be a problem if the test rigs require a lot of physical space and you have limited room available for the equipment. If, for example, you are in the stone-crushing business, you might not have enough space to test all possible combinations of stone crushers in your testing facility. You also tie up a lot of capital in test equipment that you can’t sell but need to continuously upgrade to keep relevant. In this case, looking at virtualization technologies and simulated hardware would be a good idea. This would limit the amount of actual hardware you would need to test on, but what testing strategy should you use, and what would be the interface towards the simulated hardware?

With a module system that has identified interfaces towards hardware, and a module strategy that captures what variance in hardware you expect now and in the future, you are better prepared to set up a simulation environment that tests all aspects of your software. Modules that should have a loose connection to hardware can be tested against simulated hardware, while modules with a tight connection need strong interface tests at module level to verify that they generate the expected results regardless of the rest of the system.
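
A minimal sketch of what such a hardware seam could look like is shown below; the MotorController interface, the crusher module, and the rpm values are assumptions made for illustration, not taken from any real product.

```python
# Hypothetical sketch: the module only talks to hardware through a small
# interface, so module tests can inject simulated hardware instead of a rig.
from abc import ABC, abstractmethod


class MotorController(ABC):
    """Hardware-facing interface owned by the module."""

    @abstractmethod
    def set_speed_rpm(self, rpm: int) -> None: ...

    @abstractmethod
    def read_speed_rpm(self) -> int: ...


class SimulatedMotorController(MotorController):
    """Stand-in for the physical rig, good enough for module-level tests."""

    def __init__(self) -> None:
        self._rpm = 0

    def set_speed_rpm(self, rpm: int) -> None:
        # Pretend the motor saturates at 3000 rpm, as a real one might.
        self._rpm = max(0, min(rpm, 3000))

    def read_speed_rpm(self) -> int:
        return self._rpm


class CrusherModule:
    """The module under test; it never imports hardware drivers directly."""

    def __init__(self, motor: MotorController) -> None:
        self._motor = motor

    def start_crushing(self) -> None:
        self._motor.set_speed_rpm(1500)


def test_start_crushing_spins_up_motor() -> None:
    motor = SimulatedMotorController()
    CrusherModule(motor).start_crushing()
    assert motor.read_speed_rpm() == 1500


if __name__ == "__main__":
    test_start_crushing_spins_up_motor()
    print("simulated hardware test passed")
```

The same module test can later be pointed at a driver for the physical rig, so the scarce test hardware is only needed for the tightly coupled cases.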

For software that isn’t hardware dependent, which is usually a large part of the software, you can potentially take it one step further and virtualize it so that it can run on any type of PC hardware through virtualization or Docker technologies. That would allow a testing system that utilizes the power of the cloud.

2. Limitations in testing personnel

Instead of investing in more hands and feet to keep doing manual tests with human labor, strategically automating the manual tests makes a lot more sense. Not only will it free up precious test resources for more intelligent exploratory testing of novel parts of the system, it will also be easier to innovate as software development proceeds when there is a powerful automated test infrastructure you can rely on to ensure that the code will keep running well in the future. We previously covered this in depth in our blog about software testing.

In essence, investing in automating as many tests as possible will free up testing resources to do valuable exploratory testing of new functionality without any additional testing costs, i.e. the same or lower cost of testing but higher software quality, leading to lower costs in “post-release” activities.

3. Too large a software system to test

It doesn’t matter how great your test infrastructure and organization are if you can’t run enough tests in parallel due to a monolithic software architecture. Modularization should be done so that you can run more tests in parallel and get test results faster. With faster test results, developers will be more efficient at finding and solving bugs, i.e. reducing overall testing overhead costs.
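
As a rough sketch of the idea, assuming each module ships its own independent test suite under modules/<name>/tests and is tested with pytest (both are assumptions for the example), a build pipeline could fan the suites out in parallel like this:

```python
# Rough sketch: because modules are independent, their test suites can be
# fanned out in parallel instead of being run as one long monolithic pass.
# The module names and the "pytest modules/<name>/tests" layout are
# assumptions made for this example.
import subprocess
from concurrent.futures import ThreadPoolExecutor

MODULES = ["payments", "inventory", "reporting", "telemetry"]


def run_module_suite(module: str) -> tuple[str, int]:
    """Run one module's test suite in its own process and report the exit code."""
    result = subprocess.run(["pytest", f"modules/{module}/tests", "-q"])
    return module, result.returncode


def main() -> int:
    with ThreadPoolExecutor(max_workers=len(MODULES)) as pool:
        results = list(pool.map(run_module_suite, MODULES))
    for module, code in results:
        print(f"{module}: {'PASS' if code == 0 else 'FAIL'}")
    # Fail the pipeline if any module suite failed.
    return max(code for _, code in results)


if __name__ == "__main__":
    raise SystemExit(main())
```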

This may still not be enough to get the improvements in test feedback that you need. It may then seem tempting to limit testing to the software modules that have changed. The problem with this is that, in some cases, changes to one part of the software system lead to side effects in other parts of the system.

One modern way to solve this is to involve machine learning in the test selection, so-called Predictive Test Selection. In this method, machine learning models are trained to trigger the tests that usually fail depending on which part of the software system was changed. This can significantly reduce the number of tests that are run while keeping the quality of the produced code high, as shown in this interesting Meta blog post. This was cutting edge in test automation a few years ago, but today there are off-the-shelf testing frameworks that will help you achieve it, for example Predictive Test Selection from Launchable, and we can probably assume that a lot more will happen in this field as AI technologies evolve.
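
As a toy illustration of the idea only, and not how Meta or Launchable implement it, the sketch below estimates per-test failure rates from invented historical CI data and selects the tests whose failure probability for the changed modules exceeds a threshold:

```python
# Toy illustration of predictive test selection: estimate, from (invented)
# historical CI data, which tests tend to fail when a given module changes,
# and run only the tests whose estimated failure probability is high enough.
from collections import defaultdict

# (changed_module, test_name, failed) observations harvested from past CI runs.
HISTORY = [
    ("payments", "test_invoice_totals", True),
    ("payments", "test_invoice_totals", True),
    ("payments", "test_ui_rendering", False),
    ("inventory", "test_invoice_totals", False),
    ("inventory", "test_stock_levels", True),
]


def failure_rates(history):
    """Estimate P(test fails | module changed) for every (module, test) pair."""
    runs = defaultdict(int)
    fails = defaultdict(int)
    for module, test, failed in history:
        runs[(module, test)] += 1
        fails[(module, test)] += int(failed)
    return {key: fails[key] / runs[key] for key in runs}


def select_tests(changed_modules, history, threshold=0.5):
    """Pick the tests worth running for this change set."""
    rates = failure_rates(history)
    return sorted(
        {test for (module, test), rate in rates.items()
         if module in changed_modules and rate >= threshold}
    )


print(select_tests({"payments"}, HISTORY))  # -> ['test_invoice_totals']
```

A production-grade model would of course use far richer features and history, but the principle of mapping changes to likely failures is the same.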

Releasing code in a modern DevOps fashion

Instead of shipping “big bang” annual major releases, releasing code continuously and in small increments is the modern way of working today. This “DevOps” way of working can create great new opportunities, but if you have a legacy codebase it can be difficult to achieve. Is there a way to introduce this way of releasing software without having to rewrite the code you already have, and can you test it to ensure that it reaches the same (or preferably higher) quality levels as before?

One example, taken from the telecom industry, is when functionality that traditionally ran on proprietary hardware and on-premises software was moved to cloud infrastructure. This is possible since features such as 5G Network Slicing are primarily implemented in software, and running them on cloud infrastructure gives better possibilities for scale, uptime, and cost optimization. At the same time, there are a lot of legacy systems still out there in need of maintenance and feature upgrades. Focusing solely on the new cloud infrastructure would leave many existing customers with on-premises systems unhappy and with a feeling of abandonment.
The challenge for the telecom companies is to avoid rewriting all legacy functionality for the cloud in order to reach the same level of functionality as in the legacy system, and to avoid having to test all of the code in parallel tracks several times.


Figure 2. Supporting multiple software stacks at the same time

Creating a modular architecture with clearly defined interfaces and functions enables software where you can mix legacy code and new code developed in a DevOps fashion. This lets you reuse parts of what you have already created and validated over time, and change only the areas that need to change to enable the new functionality. In this way, the customers never experience a degradation in performance and features, and the development organization doesn’t have to implement the same thing twice.
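
A minimal sketch of this mixing, with all names invented for illustration, could look as follows: the legacy on-premises code is wrapped behind the same module interface as the new cloud implementation, so one contract check verifies both without duplicating test cases.

```python
# Hypothetical sketch: legacy on-premises code and a new cloud implementation
# are wrapped behind the same module interface, so one contract check verifies
# both without duplicating test cases. All names are invented for illustration.
from typing import Protocol


class SubscriberRegistry(Protocol):
    """Module interface shared by legacy and new implementations."""

    def lookup(self, subscriber_id: str) -> bool: ...


class LegacyRegistryAdapter:
    """Thin wrapper around the already validated on-premises code path."""

    def __init__(self, legacy_db: dict) -> None:
        self._db = legacy_db

    def lookup(self, subscriber_id: str) -> bool:
        return bool(self._db.get(subscriber_id, False))


class CloudRegistry:
    """New implementation, deployed and scaled independently in the cloud."""

    def __init__(self, known_ids: set) -> None:
        self._known = known_ids

    def lookup(self, subscriber_id: str) -> bool:
        return subscriber_id in self._known


def check_registry_contract(registry: SubscriberRegistry) -> None:
    """The single contract both old and new implementations must satisfy."""
    assert registry.lookup("alice") is True
    assert registry.lookup("unknown") is False


check_registry_contract(LegacyRegistryAdapter({"alice": True}))
check_registry_contract(CloudRegistry({"alice"}))
print("both implementations satisfy the module contract")
```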

Another key area to address is how to test and release the software. Many companies that released software successfully before the “cloud era” have invested a lot in a CI/CD (Continuous Integration/Continuous Delivery) flow where code changes are merged into a main branch and hardened through tests in clearly defined stages before being released in a monthly or quarterly release cycle, and where the entire application is built as a single, interconnected unit. To achieve this, companies invested heavily in creating automated integration and test environments with a mix of new and old supporting tools and infrastructure. The infrastructure is typically owned and managed by the organization itself, which means that the organization is responsible for provisioning servers, setting up networking, configuring storage, and managing every other aspect of it. This can be a very large fixed cost that doesn’t go away even if the infrastructure isn’t used.

In contrast, a cloud-based microservice CI/CD flow leverages cloud computing resources for much of the infrastructure. In this model, the cloud service provider owns and manages the infrastructure, and the architecture is decentralized. The application is broken down into smaller, loosely coupled services that can be developed, deployed, and scaled independently. This approach creates a fundamentally different development environment: costs come from actual consumption when the CI/CD flows are utilized, which means that limiting how much you use the build pipelines and test frameworks directly benefits the bottom line of the company.

Now, how do you avoid losing the investment you’ve made and ensure legacy support without reinvesting in and replacing the older environment? The cost and time required for such changes can be substantial. Here’s where a modular architecture with clearly defined interfaces and functions comes into play. By creating software that combines both legacy and new code, you can reuse existing parts of the CI/CD machinery to verify the code without having to reinvest in duplicated test cases. This substantially reduces the necessary investments.

However, a word of caution: always consider the entire flow. Some software and test flows might hinder your ability to achieve the development and test lead times required for successful “DevOps” practices; one example could be a lack of automated testing capabilities in a certain part of the test suite. If certain areas are preventing you from meeting your customers’ lead-time requirements, consider redeveloping the software and adjusting the CI/CD flows. Investing time and effort in developing a joint test strategy is crucial; rather than viewing it as a cost, recognize it as a valuable investment to avoid future expenses.

Summary

The intent of this blog was to explain how good use of modularity helps to reduce costs by finding errors early with module interface tests and by making type approvals and certification tests easier. We also described how investing in a modern and efficient test infrastructure and releasing software in a modern DevOps fashion can help you reduce costs while enabling a more efficient way of working.

A blog like this can only scratch the surface of what your company may achieve by going down the path of creating a modular software architecture. The potential cost savings differ widely, and we specialize in helping companies understand what these potentials are and how to unlock them.



Want to know more? 

If you would like to know more, don’t hesitate to contact us directly via e-mail or LinkedIn!


 

Roger Kulläng 

Senior Specialist 

Email

LinkedIn

 

 


 

Thomas Enocsson
EVP and President Modular Management Asia Pacific AB

Email

LinkedIn

 

 

References:

  [1] IBM Systems Sciences Institute, Relative Cost of Fixing Defects