
Are High Reliability Applications Safe?

9th May 2013
Nat Bowers

We increasingly rely on systems where an error could cause financial disaster, organisational chaos, or in the worst case, death. Could the mandatory use of open source software improve safety and security in High Reliability applications? Robert Dewar, Co-founder and President of AdaCore, explores this question in detail in this insightful article from ES Design magazine.

Software now plays a crucial role in all complex systems: some of the start-up problems at Heathrow Terminal 5, for example, have been attributed to computer 'glitches', while modern commercial airliners depend on complex computer systems to operate safely.



If we go to the necessary trouble and expense, and the specification is very clear, we are actually pretty good at creating near error-free software. Not one life has been lost on a commercial airliner due to a software error. That's not a bad record.



However, we definitely do have cases of people being killed by software errors: notably, a patient killed by an excessive dose of radiation from a medical device, and a Japanese worker killed by a berserk robot. Both cases could probably have been prevented by imposing more stringent controls on the relevant software.



Indeed, from the point of view of preventing bugs, we have pretty good technology if we care to use it. In some cases, we can use mathematical 'formal' methods to demonstrate that the code is error-free. Such an approach is being used for iFACTS, the new air traffic control system for the UK.
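To give a flavour of what such a formal approach looks like (a minimal, hypothetical sketch in SPARK-style Ada, not code from iFACTS; the names and bounds are invented for illustration), a subprogram can carry a contract that a proof tool checks for every possible input, rather than for a handful of test cases:

-- Hypothetical example: the contract states exactly what the function
-- must return; a prover can show the postcondition always holds and
-- that no overflow or range check can ever fail.
function Bounded_Add (A, B : Natural) return Natural is (A + B)
  with Pre  => A <= 10_000 and B <= 10_000,
       Post => Bounded_Add'Result = A + B;

Establishing such properties statically, rather than hoping that testing happens to exercise the failing case, is the sense in which code can be demonstrated to be error-free.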



So perhaps we don't have too much to worry about, and this article may end up being little more than a plea for education, so that the techniques for producing error-free software (for example, the various safety standards used for avionics software) become more widely adopted.



However, the world around us has changed since September 11th, 2001, and the subsequent attacks on London and Madrid. Now it is not sufficient to assure ourselves that software is free of bugs; we also have to be sure that it is free from the possibility of cyber-attacks.



Any software that is critical is a potential target for attack. This includes such examples as the software used to control nuclear reactors, power distribution grids, chemical factories, air traffic control ... the list goes on and on.



Safe And Secure?



Protecting software against such attacks is very much harder than making it error-free. Consider, for example, the important tool of testing. No amount of testing can convince us that software is secure against future attack modes that have yet to be devised.



To think otherwise would be to take the attitude that since the World Trade Center had stood for decades, it must have been safe from future attacks. So how do we guarantee the security of software?



In an episode of the American television series 'Alias', Marshall, the CIA super-hacker, is on a plane, clattering away on the keyboard of his laptop during take-off preparations. When Sydney tells him he has to put his laptop away, he explains that he has hacked into the flight control system to make sure the pilot has properly completed the take-off checklist.



Just how do we make sure that such a scenario remains an amusing Hollywood fantasy and not a terrifying reality? In this article, we will argue that one important ingredient is to adopt the phrase 'No More Secrets' from the movie Sneakers, and systematically eliminate the dependency on secrecy for critical systems and devices.



The disturbing fact is that the increasing use of embedded computers, controlling all sorts of devices, is moving us in the opposite direction. Traditionally, a device like a vacuum cleaner could be examined by third parties and thoroughly evaluated.



Organisations like Which? in the UK devote their energies to examining such devices. They test them thoroughly, but importantly they also examine and dismantle the devices to detect engineering defects, such as unsafe wiring. If they find a device unsafe, it is rated as unacceptable and the public is protected against it.



But as soon as embedded computer systems are involved — and they are indeed appearing on even lowly devices like vacuum cleaners — we have no such transparency. Cars, for example, are now full of computers and without access to the software details, there is no way to tell if these cars are 'Unsafe at Any Speed'.



Why is this software kept secret? Well, the easy answer is that nearly all software is kept secret as a matter of course. Rather surprisingly, in both Europe and the USA, you can keep software secret and copyright it at the same time; surprising because the fundamental idea of copyright is to protect published works.



Companies naturally gravitate to maximum secrecy for their products. The arguments for protecting proprietary investment and Intellectual Property Rights seem convincing. The trouble is that the resulting secrecy all too often hides shoddy design and serious errors that render the software prone to attack.



Can we afford such secrecy? I would argue that in this day and age, the answer must be no. First of all, there is no such thing as a secret; there are only things that are known by just a few people. If the only people with access to this knowledge are a small number of employees at the company producing the software, and there are bad guys willing to spend whatever it takes to discover these secrets, do we feel safe?



At a recent hackers' convention, there was a competition to break into a Windows, Mac, or Linux operating system using a hitherto-unknown technique. The Mac was the first to be successfully attacked, in under two minutes.







Freely Licensed Open Source Software



In recent years, a significant trend has been far greater production and use of FLOSS (Freely Licensed Open Source Software). Such software has two important characteristics. First, it is freely licensed, so anyone can copy it, modify it, and redistribute it. Second, the sources are openly available, which means the software can be thoroughly examined and any problems that are found can be openly discussed and fixed.



What we need is to establish the expectation that the use of FLOSS is at least desirable, and perhaps even mandatory, for all critical software. Sure, this makes things a little easier for the bad guys, but they were willing to do whatever it takes to break the secrecy anyway. Importantly, it makes it possible for the worldwide community of good guys to help ensure that the software is in good shape from a security point of view.



At the very least, we can assure ourselves that the software is produced in an appropriate, best-available-technology manner. If we opened up a television set and saw a huge tangle of improperly insulated wires, we would deem the manufacturing defective. The embedded software in many machines is in a much worse state than this tangle of wires, but it is hidden from view.



There are two aspects to the use of FLOSS in connection with security-critical software. First, we gain considerably by using FLOSS tools in the construction of such software. One way software can be compromised is through a compiler that has been subverted in a nefarious manner. Suppose, for example, that our compiler is set up so that it looks for a statement like:



if Password = Stored_Value then



and converts it to



if Password = Stored_Value
  or else Password = "Robert Dewar"
then




Then we are in big trouble, and it is trouble we cannot detect even by close examination of the application sources, since the back door leaves no trace there.
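To make the mechanism concrete, here is a purely hypothetical sketch (not any real compiler's code; Matches and Append_Or_Else are invented helpers) of where such a back door would live. The pattern matching happens inside the compiler itself, which is why the application sources that everyone reviews stay perfectly clean:

-- Hypothetical fragment of a subverted compiler: while compiling the
-- application, spot the password comparison and quietly extend it.
if Matches (Condition, "Password = Stored_Value") then
   Append_Or_Else (Condition, "Password = ""Robert Dewar""");
end if;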



Ken Thompson, co-creator of Unix, warned of such subversion in his famous Turing Award lecture 'Reflections on Trusting Trust'. It is far easier to subvert proprietary software in this manner than FLOSS.



After all, early versions of Microsoft's Excel program contained a fully featured flight simulator hidden from view. If you can hide a flight simulator, you can easily hide a little security 'glitch' like the one described above. This would be far harder to do with a compiler whose sources are very widely examined.



The second aspect is to make the application code itself FLOSS, allowing the wider community to examine it. Now Which? magazine could employ experts to look at the software inside the vacuum cleaner as part of their careful evaluation, and reject devices with unsafe software.



The arguments above seem convincing enough from the public's point of view, so what's the objection? The problem is that companies are dedicated to the idea that they must protect proprietary software.



An extreme example of this is Sequoia Voting Systems, which has reportedly refused to let anyone examine the software inside its machines on the grounds that it is proprietary, with threats of lawsuits against anyone trying to carry out such examinations.



Here we have a case where one company is putting its proprietary rights ahead of essential confidence in our democratic systems. The situation with voting machines is perhaps even more critical in Europe, where in some cases, for example the European Parliament elections, complex voting schemes are used, and we depend entirely on computer systems to implement them accurately and without any possibility of subversion.



What's to be done? We do indeed have to ensure that companies' proprietary rights are protected well enough to preserve the incentive to produce the innovation we desire, but this cannot be done at the expense of endangering the public through insecure software.



In most cases, the real inventions are at the hardware level, where traditional protection, such as patents (which require full disclosure), operates effectively. Perhaps innovative forms of copyright protection can provide adequate protection for software, though in most cases I suspect that such protection is not really needed.



Suppose Boeing were forced to disclose the software controlling its new 787 'Dreamliner'. Would this suddenly give Airbus a huge advantage? Most likely not, as you can't just lift the 787 avionics and drop them into an Airbus A350.



Yes, Airbus could probably learn useful things by studying the software, just as it learns useful things by studying the hardware and design of Boeing's planes. If everyone were forced to disclose their software, this kind of cross-learning would actually benefit competition and innovation.



We can't just hum along on our current path here. The world is an increasingly dangerous place, and the growing use of secret software that is poorly designed and vulnerable only adds to that danger. We have to find ways of addressing this danger, and more openness is a key requirement in this endeavour.
