Taking the right approach to medical device user interfaces
In this Q&A, Jason Clarke of Crank Software discusses the impacts of medical device UI and how to approach UI development.
Delivering exceptional design is key for any device being introduced into consumer markets, but it can be vital for devices in the medical world. In this environment, an intuitive user interface (UI) can make all the difference for devices used in time-sensitive, critical situations. As a result, embedded UI software engineers are playing an increasingly important role in saving time that translates into improved patient care. However, UI design can be complex and time-consuming, affecting which product features can be implemented within a market window.
Q: Why has medical device UI become so critical today?
Medical device developers are upping their game in terms of providing a richer experience for users. Some of that has to do with market competition - they need to produce new devices with broader capabilities, more features and more options for users. The challenge is to develop a UI that provides easy access to all of those new features so that users can quickly and intuitively reach the controls they need, especially in critical-care environments.
Device developers are also expanding their markets beyond traditional hospital and clinical environments where trained operators use the devices. Many new devices are being used remotely in homes or as wearables. That means the UI now has to appeal to consumers who are accustomed to cell phones and apps, and what may be a fairly complex device has to be understandable for patients using it on their own.
Q: What are the challenges medical developers are facing in terms of UI?
A key issue we see is a disconnect between UI development and device development. Many more devices now have screens to display information, provide feedback, and control the device and its options. In the past, user interaction was developed by the engineers, since they were the ones who were most familiar with the device’s capabilities. But developing an intuitive UI for onscreen usability requires a different set of capabilities in terms of understanding the needs and proficiencies of a wide range of end users.
Engineers may end up moving ahead to develop a product and manually writing code that has to be changed after usability testing. That causes expensive delays and redevelopment. This is where a development framework that lets them incorporate this valuable usability feedback into the application quickly and easily can help accelerate development, resulting in faster time to market for their products.
Q: What are the impacts of UI on testing and validation?
User input and usability testing need to happen as early as possible in the product design, and the prototype needs to be as close as possible to the final design. We don’t want to fully code the UI and iterate on that - it would be much too expensive and time-consuming. But UI developers can use a development framework that lets them quickly design the UI based on user input and make it look and act exactly like it will in the final product. They can even create a functional UI mockup using Android on a tablet or cell phone for early product concept validation. From this early stage and throughout development, UI developers can get user feedback on everything from colour saturation to the size and placement of touch buttons, drop-down menus, responsiveness - you name it. Iteration happens in parallel with product development, so the UI can reach a stable point well ahead of the rest of the system, making testing and validation smoother and more effective.
Q: Can’t UI be developed using traditional coding tools?
Manual coding requires a high level of technical skill, which adds costs to the project. Manual coding also means that every element in the UI layout needs to be created from scratch, which adds complexity and opportunities for errors to be introduced into the UI as well as the overall code.
Since everything is being coded manually, there’s additional time needed to debug the code on top of testing the UI itself. And it can take weeks from when coding begins until a prototype UI is running on target hardware.
Collaboration across a team adds new challenges, since everyone needs the same software and hardware setup to share and develop the UI, which also adds costs to the project. Often these projects use some form of open source coding framework, but that can mean a lack of structured support when problems arise, no guaranteed patches or fixes to resolve potential issues, and a lack of enhancements that could help them code faster or introduce new functionality to their product.
Q: What are the key considerations for choosing UI development tools?
Developers need a UI framework tool that allows most of the development to be done quickly and directly within the tool through its automation of basic tasks. The tool should be able to import standard design files, such as those from Adobe Photoshop, so that a working prototype UI can be seen on the target within minutes.
The tool should also help spread the workload for UI development across the entire team so members can collaborate more effectively, reducing the amount of time to develop the UI and reducing the total cost of development for the project. With integrated validation, testing, and prototyping tools, project teams should be able to perform much of the UI performance testing on their desktops and only deploy to the target hardware when they need to validate its performance on the actual device. With Storyboard, for example, this can be done locally or remotely and doesn’t require the UI to be compiled first, which simplifies and speeds the process compared to other approaches. The other advantage to this approach is that the UI is separate from the back-end logic that controls the device. That separation can help with certification and safety - even if something happens to cause the UI to fail, the device continues to work.
It’s important for developers to look for differences in how UI tools themselves were developed. Many tools rely on a proprietary coding framework based on C++, which can put designers back into situations similar to manual coding. The main difference between using those tools and 100% manual coding is that the UI development tool typically automates some UI functions and the framework includes validation, testing, and prototyping tools. However, it still requires coding and compiling to optimise the UI for the embedded hardware, and compiling needs to be done each time it’s deployed to the hardware.