Software as a process
A single software product may combine code from multiple suppliers and sources, making safety, security and performance processes even more important.
By Rod Cope, CTO, Rogue Wave Software.
The past year was a tipping point for the software industry, where the impacts of code faults in the field were felt far beyond the limited confines of our own development teams. Widespread automotive recalls and events such as Heartbleed and GHOST hit home for all walks of life, linking real safety and security issues to the phrase ‘it’s a software bug’. There were plenty of good stories (the Rosetta space probe landing on a comet) to go with the bad stories (the Target data breach, costing shareholders $148m), yet the overall responsibility is on the industry to do better.
To provide some context, in the past ten years the number of data breaches in the United States has climbed steadily and is predicted to reach a peak of 800 instances in 2015 (see image below). In the automotive industry, over 900,000 vehicles are affected by recalls each year, according to The Automobile Association in Britain. These trends are not entirely surprising. As companies try to one-up each other with continual innovation and more features, and with outsourcing overtaking in-house development, software complexity has grown far beyond our ability to find bugs effectively.
The cornerstone of embedded development is software, and software is where most errors are introduced. Not only has the volume of delivered code increased; the complexity and variety of architectures, platforms and protocols have increased too. This pushes the number of permutations of state, behaviour, interactions and outputs well beyond our capabilities to test. And it’s not just the code itself: other influences have affected our ability to deliver solid, reliable products.
Today’s software products are the result of many suppliers, vendors, open source repositories and legacy code coming together in a mix of different processes, standards and cultures. Each input offers a chance to introduce safety, security, or performance-related errors. While some integrators are good at enforcing consistent standards and quality guidelines, most struggle to achieve comprehensive testing across all inputs that fits into tight timelines and cost constraints. Trust and enforcement are the key differentiators when it comes to the software supply chain.
Whether it’s the shift towards agile, continuous integration, or the adoption of new standards, embracing new ways of developing software hits organisations where it counts: the delivered product. It takes time for teams to understand and normalise new processes and, for companies with limited resources, it’s very likely that quality, quantity or both will suffer. When you add in the risks associated with different teams using different processes, the possibility of a defect reaching the field is even higher.
A relatively new source of increased complexity is the IoT. No longer do software vendors have to worry only about their own products; they need to account for the potentially untested or unvalidated inputs coming from other systems as well. For example, the connected car opens up new opportunities for attacking automotive systems, such as remote connections through in-vehicle infotainment systems and wireless vehicle services. With these fault vectors and more, it’s no wonder that software bugs are making the headlines. The good news for automotive is that other industries have figured out strategies that can help stop defects from getting out into the open.
While some industries are very familiar with coding and safety standards, others are just beginning to adopt them, recognising that standards give valuable goals to achieve and measures of how to improve. Automotive companies have been using coding and safety standards, such as MISRA and ISO 26262, for some time now but they are just starting to investigate how security standards can help protect against hackers. Adopting common, community-driven security standards such as OWASP, CWE and DISA STIGs is essential both for educating development teams on what makes code secure and for measuring how secure their code actually is.
Most companies use open source to optimise their engineering costs without realising the potential risks to security, technical quality or licensing liability. Moreover, many companies may not even know where open source is used or delivered, as it’s fairly easy for any developer or supplier to include code without anyone knowing about it.
Security breaches continue to grow (source: Identity Theft Resource Center, analysis: Rogue Wave IMSL Numerical Libraries)
To minimise the risks, companies should adopt open source policies and governance platforms that formalise the acquisition, provisioning and tracking of open source code. This helps eliminate inconsistencies in versioning and licensing and tracks where packages are deployed, so issues can be isolated faster. Organisations can also adopt open source scanning tools to identify where both the known and unknown packages are, to identify potential risks and better inform testing activities. It’s also important to ensure that policies and tools are consistent across the supply chain, otherwise the weak link may give rise to an issue.
The threats of hackers, data loss and system downtime persist across all industries, and with the advent of more communications and connections, embedded systems are not as protected as they once were. The challenge is two-fold: educating development teams on how code can be exploited, and adding testing techniques to find potential security flaws before they’re released. The most important point to remember when it comes to security is ‘be paranoid’. Don’t trust inputs coming into the system; place strict controls on suppliers and ensure that all inputs are validated and restricted, to protect code from malicious data and control.
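The ‘be paranoid’ rule can be made concrete with a small sketch in C. The names here (frame_t, read_payload, MAX_PAYLOAD) are invented for illustration, not taken from any real system: the idea is simply that a length field arriving from an external link is never trusted until it has been checked against the real capacity of both buffers.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_PAYLOAD 64

/* Hypothetical message frame from an external system (e.g. an
 * infotainment or telematics link): a sender-declared length
 * followed by payload bytes. */
typedef struct {
    uint8_t declared_len;
    uint8_t payload[MAX_PAYLOAD];
} frame_t;

/* Copy the payload only after validating the untrusted length field.
 * Returns the number of bytes copied, or -1 if the frame is rejected. */
int read_payload(const frame_t *frame, uint8_t *out, size_t out_size)
{
    if (frame == NULL || out == NULL)
        return -1;
    /* Never trust the sender's length: clamp it against both the
     * frame's real capacity and the caller's buffer size. */
    if (frame->declared_len > MAX_PAYLOAD ||
        (size_t)frame->declared_len > out_size)
        return -1;
    memcpy(out, frame->payload, frame->declared_len);
    return (int)frame->declared_len;
}
```

Rejecting the frame outright, rather than silently truncating it, is a deliberate choice here: a malformed length is treated as evidence of a faulty or hostile sender, not as data to be repaired.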
One method that has proven successful in mitigating security risks is using automated code analysis to look for potential flaws. Capers Jones of Namcook Analytics found that, without tools such as Static Code Analysis (SCA), developers are less than 50 percent efficient at finding bugs in their own software. SCA is adept at understanding patterns and behaviours in code, across multiple compilation units and developers, to reveal security holes such as buffer overflows, suspicious incoming data and unvalidated inputs. More sophisticated SCA tools can also compare code against common security standards, such as OWASP and CWE, to determine gaps in coverage or generate compliance reports. Rather than convincing teams to spend more effort on security testing, use tools to reduce the effort for you and your suppliers.
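To illustrate the kind of buffer overflow an SCA tool is built to flag, here is a minimal C sketch with invented function names: an unbounded strcpy into a fixed-size buffer, next to a bounded variant that an analyser would accept.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Unsafe: copies externally supplied data with no bounds check.
 * A static analyser would flag this strcpy as a potential buffer
 * overflow (e.g. CWE-120), because 'input' may exceed 7 characters. */
void store_id_unsafe(char dest[8], const char *input)
{
    strcpy(dest, input); /* overflows dest if input > 7 chars */
}

/* Safe variant: snprintf truncates to the destination's capacity
 * and always NUL-terminates the result. */
void store_id_safe(char dest[8], const char *input)
{
    snprintf(dest, 8, "%s", input);
}
```

The two functions are behaviourally identical on well-formed input; only when the input is longer than expected does the unsafe version corrupt memory, which is exactly the class of defect that escapes manual review and is caught mechanically by analysis.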
The benefits of continuous integration and testing have proven to be effective for many organisations, allowing them to deliver more robust features at a faster pace. This is the result of putting the burden of common or complex development tasks onto tools that perform in the context of frequent check-ins and builds. When switching from traditional testing methods to continuous integration, it’s critical to adopt these kinds of tools to keep defect rates low and developer frustration to a minimum. It’s also important for these tools to be as comprehensive as possible, testing not only for technical defects but security flaws and standards compliance as well. Using the static code analysis example, adopting a tool that covers all the programmatic, security and standards bases as well as fitting into a continuous integration model means additional, costlier tools won’t be necessary.
Complementary to continuous integration is continuous improvement: how effective are these measures and how can they be made better? The first step is to establish metrics and develop reports that help track defect trends (both number and source), compliance to standards and developer activities to better understand where the problems are and where effort is being spent. Ideally this data is collected and reported automatically by the development tools so the teams don’t have to worry about it.
The second step is to monitor trends, issues and activities regularly, to be able to respond as quickly as possible. For open source, using a governance platform that alerts teams to security vulnerabilities in open source packages is an effective method for identifying problems early and preventing flaws from getting into the released product. This is especially important for open source as most developers don’t think twice about the robustness of open source code and rarely subject it to the same rigorous testing as their own code.
The rapid growth in automotive complexity, connectivity and the software supply chain emphasises the importance of getting security, safety and reliability under control as soon as possible. Embracing techniques and adopting tools that are proven in other industries will help create systems that stay out of the headlines and deliver a solid path for future innovation.