Diagnosing the threat to connected healthcare apps
Dealing with cyber threats has become a standard challenge in every sector, but one of the most alarming possibilities is an attack on medical technology, where the risks jump from digital to physical. This threat was highlighted recently by research revealing that attackers could access anaesthesia devices through an apparent security flaw.
By Chad McDonald, VP of Customer Experience at Arxan Technologies
Researchers discovered that if a threat actor had access to the hospital network to which the machine was connected, they could force it to use a less secure communication protocol. They would then be able to alter parameters of the machine such as gas mix, pressures and warning alarms, all of which could have serious consequences for patients in the event of an attack.
Such stories are nothing new. In 2017, the US Food and Drug Administration (FDA) released a security advisory regarding vulnerabilities found in Abbott Laboratories’ pacemakers. These vulnerabilities could have let cyber criminals access the devices to “modify programming commands to the implanted pacemaker, which could result in patient harm from rapid battery depletion or administration of inappropriate pacing”. This forced Abbott to roll out firmware updates to almost half a million pacemakers.
Alongside the potential direct threat to patients, unsecured healthcare apps provide cyber criminals with a way to steal data and break into healthcare networks. A vulnerability test conducted on 71 mobile health apps in the US, UK, Germany and Japan showed that they were exposed to several cyber security risks, such as the exposure of sensitive data and issues with authentication. Many of these weaknesses were not picked up or dealt with during typical quality assurance processes. Several of these apps have been approved for use by the FDA in the US, as well as the NHS in the UK.
Why cyber criminals are targeting healthcare
Threat actors can exploit vulnerabilities in applications through reverse engineering, where an app is effectively dismantled and rebuilt – something traditional perimeter and network security measures cannot prevent. Cyber criminals have many reasons to explore the inner workings of an app in search of weaknesses. Getting hold of encryption keys and API credentials can enable a threat actor to target the servers the app communicates with.
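The ease of recovering such secrets is worth illustrating. Hardcoded keys typically survive compilation as plain strings, so even a crude byte scan over an app binary can surface them. The sketch below is a toy illustration, not a real pentest tool; the patterns, key names and the simulated binary are all invented for the example.

```python
import re

# Patterns that loosely resemble embedded credentials. Real secret scanners
# (and attackers) use far larger rule sets; these two are illustrative only.
SECRET_PATTERNS = [
    re.compile(rb'api[_-]?key["\'=:\s]+[A-Za-z0-9_\-]{16,}', re.IGNORECASE),
    re.compile(rb'-----BEGIN (?:RSA )?PRIVATE KEY-----'),
]

def scan_binary(data: bytes) -> list[bytes]:
    """Return byte sequences in the binary that look like embedded secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(data))
    return hits

# Simulated app binary with a hardcoded key buried among compiled data.
fake_binary = b'\x00\x7fELF...api_key="sk_live_51Habc123def456ghi"...\x00'
print(scan_binary(fake_binary))
```

No decompiler is needed here: the secret sits in the binary verbatim, which is why credentials belong in a server-side vault rather than in shipped code.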
Criminals may also seek to directly steal the health information and payment details within the app, with medical data in particular providing an easy payday for a criminal. The Trustwave Value of Data report highlighted that healthcare records attract some of the highest prices on the black market compared to other types of data. Values can range from just $5 for records that may be out of date through to more than $1,000 for the medical data of someone specific or high profile.
Cracking an app’s security could enable a criminal to reverse engineer it and tamper with its code or API calls, changing what the application does or the data it accesses, and to create their own pirated version. These copies can be sold for profit or implanted with malware or malvertising and uploaded to alternative download sites to snare unwary consumers.
These compromised apps might also include code that allows the cyber criminal to either take control of the associated device, or syphon off data generated or collected by the app. They could even be used to insert trojans and viruses onto the network of a healthcare provider if the app is able to connect to the network.
An even more sinister motive for breaking into a healthcare app sees the threat actor change the electronic medical records of an individual or group of patients. This could result in misdiagnosis of illnesses, medications being needlessly prescribed, irrelevant medical procedures being carried out or necessary ones not being carried out at all. The ramifications of this could be life changing or worse for those concerned and could also lead to prescription drugs making their way into the wrong hands.
Following the typical security path
Organisations wanting to protect their apps against unauthorised access will no doubt approach their security in a conventional manner. This includes verifying the identity of the person trying to access the app through mechanisms such as multi-factor authentication and biometrics, while also using encryption to protect data.
However, apps and the API credentials embedded within them need a more multi-layered approach than this, given that they are endpoints outside the usual IT network. Because these apps can be deployed on thousands of different devices, monitoring them is next to impossible. Indeed, once an app has been released, the IT team has little oversight of how it is being used, or what users are doing with it.
In an effort to make applications as secure as possible, developers use a range of pre-deployment tests to measure their resilience. Standard measures include static app security testing (SAST), which scans an application’s code before it is compiled, and dynamic app security testing (DAST), which detects potential vulnerabilities while the app is up and running. Developers may also apply interactive app security testing (IAST), which looks at vulnerabilities around specific user interactions, and mobile application security testing (MAST), which addresses security challenges specific to mobile apps.
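A static check of the SAST kind can be sketched in a few lines. The example below is a minimal, hypothetical rule, assuming Python source as the input: it walks the program’s syntax tree and flags calls to `hashlib.md5`, a hash function too weak for security-sensitive data. Commercial SAST tools apply thousands of such rules across many languages.

```python
import ast

def find_weak_hash_calls(source: str) -> list[int]:
    """Return line numbers where hashlib.md5(...) is called in the source.

    A toy single-rule static analysis: the code is parsed, never executed.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "md5"
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "hashlib"):
            findings.append(node.lineno)
    return findings

sample = """import hashlib
digest = hashlib.md5(password.encode()).hexdigest()
"""
print(find_weak_hash_calls(sample))  # flags line 2
```

Because the analysis only parses the code, it runs safely at build time; the trade-off, as the article notes, is that it says nothing about how the app behaves once deployed.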
Yet there is no getting away from the fact that code, functions and API secrets within an app remain unprotected after it has been deployed, leaving it vulnerable from the beginning of the reconnaissance phase of an attack. The information provided to an attacker within the code could provide a roadmap on how to further infiltrate an organisation and access patient data or other sensitive information including business critical IP.
Apps can be downloaded and used by anyone, legitimate user or cyber criminal, yet the companies deploying them have no way of knowing which. Traditional mobile application protection tools do not provide the real-time data collection, monitoring and analytics needed to understand the threat posture of apps and to react to attacks in a timely way. As a result, many medical app developers are unlikely to find out that their app’s security has been compromised until it is too late.
To minimise the security risks once an app has been released, organisations need to implement a complete security ecosystem. There is no silver bullet that will stop threat actors exploiting an app; no one security solution that will defend against those with malicious intentions. Instead, there should be a raft of security measures in place that protect against any type of attack that cyber criminals could imagine perpetrating.
Protecting app code
To prevent threat actors from reverse engineering their apps and understanding how they function, developers need to protect their code. If it is exposed, threat actors can not only view it but also manipulate it and inject their own malicious code, or use the information found within it to launch secondary attacks on APIs or back-end infrastructure.
Code obfuscation is particularly effective as a first layer of defence, as it transforms the code into a form that is extremely difficult to read or decompile. This means that even if an attacker is able to reverse engineer the app, they will be rewarded only with unreadable gibberish where the source code should be. While obfuscation makes the code far harder for unwanted intruders to penetrate, it allows the app to function exactly as intended.
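One common obfuscation technique, string encryption, can be sketched simply. Sensitive string constants ship in an encoded form and are decoded only at the moment of use, so a static dump of the binary does not reveal them. The XOR scheme, key and endpoint below are invented for illustration; real obfuscators use stronger encodings and also rename symbols, flatten control flow and insert decoy code.

```python
KEY = 0x5A  # illustrative single-byte key; real tools use per-string keys

def encode(plaintext: str) -> bytes:
    """XOR-encode a string so it does not appear verbatim in the binary."""
    return bytes(b ^ KEY for b in plaintext.encode())

def decode(blob: bytes) -> str:
    """Recover the original string at runtime, just before it is needed."""
    return bytes(b ^ KEY for b in blob).decode()

# What ships in the app: an opaque blob instead of a readable endpoint.
OBFUSCATED_ENDPOINT = encode("https://api.example-health.app/v1/records")

# Decoded only at the point of use, then discarded.
print(decode(OBFUSCATED_ENDPOINT))
```

The behaviour of the app is unchanged, which is the point: obfuscation raises the cost of analysis without altering functionality.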
Apps can also be protected with defences that proactively defeat attempted attacks. Placing active defences within an app’s code enables it to detect attempts to tamper with it and immediately shut down. Another useful capability is jailbreak detection, which allows an app to determine whether it is running in a high-threat jailbroken environment of the kind typically used for reverse engineering. The app can again be instructed to shut down if it detects an unusual environment.
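The tamper-detection idea reduces to an integrity self-check: compare a hash of the code as it exists now against a value recorded at build time, and refuse to run on a mismatch. The sketch below is a minimal illustration under that assumption; production anti-tamper defences embed many such checks throughout the binary so they are hard to locate and patch out, and the code snippets here are invented.

```python
import hashlib
import sys

def code_digest(code: bytes) -> str:
    """Hash the code bytes; any modification changes the digest."""
    return hashlib.sha256(code).hexdigest()

def verify_or_exit(code: bytes, expected_digest: str) -> None:
    """Shut down rather than execute code that fails the integrity check."""
    if code_digest(code) != expected_digest:
        sys.exit("integrity check failed")

# Recorded at build time, before the app ships.
shipped_code = b"def dose(): return 2.5"
build_time_digest = code_digest(shipped_code)

verify_or_exit(shipped_code, build_time_digest)  # untouched code: passes

# A single-character modification by an attacker breaks the digest match,
# which would trigger the shutdown path above.
tampered = b"def dose(): return 250"
print(code_digest(tampered) == build_time_digest)  # False
```

Shutting down on failure is a deliberate design choice: for a medical app, refusing to run modified code is safer than running it.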
Finally, information is key in preventing cyber attacks, and some of the most valuable information in the fight against threat actors comes from how people and machines are interacting with the app. By gaining real-time data analytics on emerging threats and vulnerabilities, security teams can optimise their defence strategies.
Healthcare data is some of the most valuable information there is, not only because of its monetary value to cyber criminals, but because of what it reveals about, and the impact a breach can have upon, the people it concerns. Protecting this information needs to be a top priority. Yet at the same time, healthcare institutions want to streamline and improve services through greater use of technologies that are themselves targets for threat actors.
Ensuring the safety of data, and ultimately patients, while at the same time using cutting edge technology, means that healthcare app developers need to take app security seriously, and build safeguards to prevent their creations being manipulated by unauthorised agents. Regardless of whether the discussion is about the health of a person or of an app, prevention is key.