7 insights from the 2019 HFES Health Care Symposium

By Korey Johnson and Jackie Ulaszek

The Human Factors and Ergonomics Society (HFES) hosted its annual healthcare symposium last week in Chicago. Our team was out in force and had a great time connecting and reconnecting with friends and colleagues around the industry. If you missed the conference, in addition to some great content, you also missed your chance to test your claw machine skills with us.

Aside from having fun, we also had the opportunity to hear about some great work that our human factors colleagues are doing around the industry, and present some of our own work. We attended many sessions and heard good discussion around FDA and regulatory perspectives, use of technology in research, interesting research methods employed, and cross-industry collaboration, among others.

Here are a few of the most interesting and useful insights our team brought back from the conference, most from the medical and drug delivery device track, but some general insights as well. Thanks to those responsible for the contributions summarized below, as well as those who made contributions elsewhere!

Statistics on HFE submission success rate

(From Pre-Symposium WK1: Improving the Safety & Effectiveness of Medical Devices Through Human Factors Considerations; MDD8 – CDRH/CDER FDA PANEL)



Note the common HFE deficiencies in submissions to help plan realistic timelines for human factors related efforts during development.

During the Center for Devices and Radiological Health (CDRH) workshop, the Human Factors Pre-Market Evaluation Team (HFPMET) provided the usual overview of the FDA organization and regulatory review pathways. They also provided some hypothetical case studies to point out common Human Factors Engineering (HFE)-related things that go wrong in submissions. According to the HFPMET team, some of the more common deficiencies in submissions from an HFE perspective that require at least one request for additional information (and therefore delay the development process) included:

  • Incomplete / missing data for user groups or use environments (i.e., insufficient exploratory research or documentation thereof)
  • Incomplete / missing analysis of known use issues (i.e., insufficient post-market surveillance or documentation thereof)
  • Critical tasks not clearly identified, or not all critical tasks assessed (i.e., insufficient use-related task/risk analysis or documentation thereof)
  • Insufficient linkage between use-related risk analysis (URRA), critical task definition, and task success criteria
  • Reporting only performance rates and/or using preference ratings as a component of acceptance criteria

This year, the CDRH team went a step further and provided some statistics around HFE submissions. Of 184 510(k) HF/UE reviews in 2018, only 21 made it through without requests for additional information from the HFPMET reviewer. Of 157 Q-Subs, only 6 resulted in agreement on a human factors validation protocol. Not surprisingly, these 6 are all included in the 21 submissions that did not require additional information. These types of statistics can be useful to highlight the importance of planning for realistic timelines around human factors-related efforts during development, and we are looking forward to seeing more of this type of reporting from the FDA in the future.
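Put in percentage terms, those figures are striking. The quick back-of-the-envelope calculation below uses only the numbers cited above (the figures are from the CDRH presentation; nothing else is assumed):

```python
# Pass rates computed from the CDRH figures cited above.
hfue_reviews = 184      # 510(k) HF/UE reviews in 2018
clean_reviews = 21      # reviews with no request for additional information
q_subs = 157            # Q-Submissions reviewed
protocols_agreed = 6    # Q-Subs ending in agreement on a validation protocol

review_pass_rate = clean_reviews / hfue_reviews
qsub_pass_rate = protocols_agreed / q_subs

print(f"510(k) reviews without an AI request: {review_pass_rate:.1%}")  # ~11.4%
print(f"Q-Subs with protocol agreement: {qsub_pass_rate:.1%}")          # ~3.8%
```

In other words, roughly nine out of ten submissions triggered at least one request for additional information, each of which adds time to the development schedule.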

Interesting regulatory and legal perspectives




Applicants in the FDA’s Pre-Cert Program for SaMD will be assessed on the extent to which they have robust human factors processes in place, and the FDA is seeking input from industry to help define how a manufacturer can demonstrate this!



Trying to design to avoid litigation makes no sense – litigation will happen regardless. But acknowledging this eventuality can serve as another arrow in the quiver for human factors practitioners vying for the resources to document a robust human factors effort during product development.



The general message for the largely US-based audience from the EU-based speaker regarding the new EU MDR was to keep calm and carry on.

There were a few panels and presentations that highlighted forward-looking regulatory / legal perspectives.

In one Bold Insight-led panel, human factors and regulatory experts from industry and the FDA described the Software As a Medical Device (SaMD) Pre-Certification Program, focusing largely on the Excellence Appraisals that will be required in order for a manufacturer of SaMD to become pre-certified. The panelists addressed the question – What does “good human factors” look like from an organizational perspective and what types of processes and metrics can manufacturers present during an Excellence Appraisal to demonstrate a culture of quality and excellence as it pertains to human factors? The panel was intended to be informational for those unfamiliar with the Pre-Cert Program, as well as a call to action for the entire HF community to contribute their examples of organizational processes and metrics directly to the FDA docket for the program. (This is a topic we have been following closely; read one of our past posts, FDA’s digital health precertification program emphasizes importance of post-market surveillance, and find more information available about the program on the FDA’s website.)

In another Bold Insight-led panel, instructional material was discussed from the perspective of human factors experts contributing to the design of the materials, and the perspective of litigators and expert testimony from the HF domain. The intent of the panel was to provide the predominantly development-focused attendees with the novel perspective of how documented human factors efforts during development can help protect a manufacturer from liability in future litigation. The attendees were presented with a number of fascinating (and in most cases, tragic) examples of cases where design for safe and effective use was called into question during litigation. The take home message was that trying to design to avoid litigation makes no sense – litigation will happen regardless. But acknowledging this eventuality can serve as another arrow in the quiver for human factors practitioners vying for the resources to document a robust human factors effort during product development.

Finally, an update was provided on the new EU Medical Device Regulation (MDR). The general message for the largely US-based audience from the EU-based speaker was – keep calm and carry on. The new MDR is largely bringing medical device regulation in the EU into more alignment with how the FDA regulates medical devices in the US, at least from a human factors perspective. Under the new regulation, “The new rules significantly tighten the controls to ensure that medical devices are safe and effective and at the same time foster innovation and improve the competitiveness of the medical device sector” (http://europa.eu/rapid/press-release_MEMO-17-848_en.htm). This initiative will be enforced starting in 2020, with all medical products needing to be recertified by 2024 at the latest.

Use of VR in research




The industry is increasingly incorporating VR and/or AR into device prototyping and research deliverables with positive results.

While virtual reality (VR) has been leveraged in many industries over the past several years, including healthcare, it is gaining traction not just as a tool to be used in the field, but also as one to be used during research and development. Many universities are utilizing “mixed reality”, a combination of augmented reality (AR) and VR, as an advanced training tool; for example, overlaying patient data, such as the progression of a gunshot wound over time, on a physical manikin and viewing those injuries through a headset while performing the physical actions on the manikin. In research and development, device prototyping software can be easily integrated into a VR program to conduct expert evaluations and comparative analyses, as well as to understand spatial relationships and context of use without the expense of physical prototypes.

Lastly, for contextual research, a demonstration was provided in one session of how stakeholders can be provided with a more impactful deliverable than a standard research report. By leveraging VR, stakeholders can review insights generated by the research as they virtually walk through the research conducted, jumping to physical locations to review research findings tagged to physical objects in the environment.

Training decay

(From MDD5 – DEVICE TRAINING PROGRAMS; Many posters, and others.)



As an industry, we are making progress towards a more consistent and practical approach to adequately simulating memory decay in operational context of use without unnecessarily burdening research timelines and budgets. But we still have a ways to go.

Of course, training decay was a popular topic in both the workshops and panels with the FDA, as well as in several other sessions and posters. Many perspectives were presented, including:

  • The cost associated with extended decay periods that may not be necessary.
  • The need to more effectively leverage seminal research such as the Ebbinghaus forgetting curve, as well as the extremely large body of memory-related research that has been conducted since the 1880’s in finding a reasonable solution for adequately simulating extended decay periods.
  • Primary research that is currently being conducted with support from the FDA to assess the extent to which performance decrements exist for variable decay periods in the context of training specifically for medical device use.
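The Ebbinghaus forgetting curve mentioned above is commonly modeled as exponential decay, R = e^(-t/S), where t is the time since training and S is a stability parameter reflecting how durable the memory is. A minimal sketch follows; the stability value used here is purely illustrative and is not an empirically validated parameter for medical device training:

```python
import math

def retention(t_days: float, stability_days: float) -> float:
    """Ebbinghaus-style exponential retention: R = exp(-t / S)."""
    return math.exp(-t_days / stability_days)

# Illustrative only: compare candidate decay periods assuming a hypothetical
# memory stability of 7 days (NOT a value derived from device-use research).
for t in (0, 1, 7, 30, 60):
    print(f"day {t:>2}: retention ~ {retention(t, 7):.2f}")
```

The practical question the industry is wrestling with is where on a curve like this a simulated decay period needs to fall to adequately represent operational use, without defaulting to the longest (and costliest) interval.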

There was quite a bit more presented and discussed on this topic. We are making progress, and hopefully some of the ongoing initiatives will give us some recognized empirical basis for decay periods in medical device human factors research that adequately simulate operational context of use without unnecessarily burdening development timelines and budgets.

Cross-industry collaboration



While there seems to be increasing acknowledgement of the value that a strong human factors presence within hospitals and clinical networks offers, there were few solutions to overcome the budgetary constraints that often stifle this effort.

There seemed to be more discussion at this symposium around efforts to increase human factors presence within hospitals and clinical networks. Though many hospitals and clinics have recognized the importance of including human factors in their processes and procedures, many barriers to complete integration still exist. Both HF professionals and clinicians agree that what the other does is important, but neither fully understands the ins and outs of the other’s role, making it challenging to find common ground for measuring improvement.

Hospitals with an HF champion have had the most successful integration; however, the role and impact of change may be limited depending on the champion’s leadership role in the organization. Even for hospitals that have the stakeholder buy-in and access to HF professionals, many times the HF expert or team is stretched thin across engagements and cannot attend to on-site demands.

Lastly, after all the above is considered, HF personnel working in hospitals most often do not have the budget necessary to seek external support when demands on the HF function are high. Most of the discussions we participated in and overheard throughout the symposium were focused on the recognition that this is an area for improvement, as opposed to discussions around solutions. However, there were a few promising cases where it seems that the right relationships have been formed to really get buy-in for human factors from a hospital administration perspective – hopefully this becomes more of a trend.

On the cross-industry collaboration front, hfMEDIC continued to gain momentum and interest as it organizes under the NSF’s Industry-University Cooperative Research Centers (IUCRC) Program. There are several pre-competitive research opportunities that routinely emerge at this conference and others, for which a vehicle such as hfMEDIC could be extremely valuable. We hope to support the development of this program and work with the existing members to drive research forward that will be of value to the industry as a whole.

What did you take away from the symposium?

These are just a few of the topics that stood out to our team, and there were many others! These also sparked discussion amongst our team:

  • Attention was drawn to the inadequacy of currently available ergonomic and biomechanical normative data for use in the design of medical devices (e.g., for specific patient populations).
  • Deception was used in a study presented to enhance the fidelity of simulated use, which sparked some interesting conversation within our team.
  • Some very interesting case studies were presented where stress was simulated with a high degree of fidelity.

And more! Let us know what your key takeaways were!

Recruiting methods and study logistics for human factors and user research



A stronger recruiting strategy that includes relationships with patient support groups and clinical treatment centers can provide better access to difficult-to-reach patient populations. Being intentional about how you plan the logistics of your human factors and user research can mitigate risks to validity introduced by biases.

Minimizing bias and mitigating risks to validity

Earlier this month we attended the 4th Annual Human Factors Excellence for Medical Device Design. We had a great time connecting with clients and colleagues (and gave away some awesome cooler backpacks). We also enjoyed the conference topics, which ranged from tactical examples of applying human factors engineering to the development of medical devices, to overcoming common pitfalls, to best practices for institutionalizing human factors and design thinking within an organization.

While many discussions resonated with me, there was one that I wanted to amplify – Strengthening Strategies for User Research through Consideration of User Groups (Tina Rees, Associate Director – Human Factors, Ferring Pharmaceuticals). In this presentation, Tina raised a number of reasons why more attention should be paid to who is recruited for user research and how those participants are recruited. Among those reasons, two stood out to me the most:

  1. The need to consider alternative recruiting methods for very specific or difficult to reach populations.
  2. The need to at least acknowledge, if not mitigate, the limitations sometimes associated with conducting user research exclusively during business hours.

Alternative recruiting methods

Increasingly, we are conducting research with populations who are difficult to identify or reach, and for whom special consideration is necessary. As research with these populations has become more common for us over the years, we have had to adapt and forge relationships with different types of patient support groups and clinical treatment centers to maintain access to the “right” kind of research participants.

Historically, and outside of the healthcare and medical device human factors space, relying solely on “traditional” market research recruiting firms to source research participants has been sufficient to find various types of end users. However, as healthcare-related products and devices are increasingly targeted to very specific use cases, rare disease patient populations, and populations with perceptual, cognitive, or physical limitations, the limits of traditional market research participant databases are tested. Forming partnerships directly with clinical sites has, in our experience, been the best approach to overcome the challenge of accessing these difficult-to-access populations. What’s more, these partnerships have provided us with access to clinicians who can offer the support needed by certain patient populations throughout the research process.

This is not to say that these partnerships can’t be formed between traditional market research recruiting firms and clinical / patient support groups – and there are some firms out there who have specialized in recruiting various patient and clinician populations – but the astute human factors practitioner will ensure that whenever the risk of not finding enough of the “right” kind of research participant is high, the appropriate relationships are in place to facilitate access.

Conducting user research during business hours

Most of the time, when we conduct user research we are doing so during normal business hours (from 9a-5p, Monday – Friday). This is usually practical, and in some cases, it is absolutely necessary (e.g., ethnography or contextual inquiry conducted in the field when end-users use a product during the course of their normal workday). However, it does make for a very clear selection bias when conducting lab-based simulated-use research. If you only plan to conduct this research during normal business hours, you are effectively adding “has no conflict during business hours (i.e., potentially unemployed), or is willing to take multiple hours off from work to participate in research” to your inclusion criteria. (Disclaimer – I am a consultant. My teams are the ones regularly tasked with planning, management, and execution of human factors and user experience research, and we do not want to be regularly conducting research on evenings and weekends either.)

I am not advocating for a wholesale shift to conducting research outside of business hours. That is neither practical nor necessary. What I am advocating for is to consider the impact that this selection bias may have on the external validity of research data. In many cases, the impact may be negligible. There may not be any anticipated differences in use-related behavior for those with more flexible work schedules and those not likely to participate during working hours. In other cases, the impact could potentially be significant. If the product for which you are conducting research is intended to be used by (e.g., among others) practicing Rheumatologists – what risk does limiting participation to normal business hours pose, and how can that risk be mitigated?

Some key factors typically within our control when we plan research include: the days of the week, hours of a day, and location in which we conduct the research, as well as how we incentivize the end users to participate. Depending on the sample size required for research, offering a few time slots earlier or later in the day during the normal work week may be sufficient to mitigate selection bias and improve the external validity of your data. Particularly if you plan to conduct the research at a site nearby and convenient to a clinical site with ample Rheumatologists (for example). Further, consider the incentive you offer participants, and whether it is enough to encourage participation from those typically reluctant to take time off from work.

A note on bias and validity

In collecting these thoughts on alternative recruiting methods and limiting research to business hours, the concepts of bias and validity are recurrent throughout. Particularly selection bias and external validity. It is worthwhile to consider the potential limitations of human factors research for medical devices as it is typically conducted. When one considers all the possible biases and threats to validity inherent in research with human subjects, these things can start to add up (Image below).

I make this point to discourage cutting too many corners when stakes are high – for example, in a human factors validation study. There are many variables at play when a validation study is planned, and sadly (but understandably in some cases) robust research design is not always deemed the most important. When, in the name of budgets and timelines, we sacrifice adequate representation of distinct user groups, a logistical plan that minimizes selection bias, or sufficient investment in realistic use scenarios – we introduce bias and detract from the validity of any conclusions we draw. Those who understand where the “minimum of 15 representative users from each distinct user group” requirement comes from know that from a statistical reliability perspective that really is the bare minimum to be confident you have provided ample opportunity for major use-related issues to present themselves for each use case, assuming robust research design. If, on top of going with the bare minimum from a sample size perspective for validation, you also “economize” your research design in one or more of the above ways, you are increasing the chances that your conclusions will be based on invalid data.
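The statistical intuition behind that 15-user minimum can be sketched with a simple detection-probability calculation: if a use-related problem affects a proportion p of the user population, the chance of observing it at least once among n independent participants is 1 - (1 - p)^n. The specific p values below are illustrative, not regulatory thresholds:

```python
def detection_probability(p: float, n: int) -> float:
    """Probability of seeing at least one occurrence of a problem that
    affects a proportion p of users, in a sample of n participants."""
    return 1 - (1 - p) ** n

# With 15 participants, a problem affecting 20% of users is very likely to
# surface at least once; rarer problems can easily slip through undetected.
for p in (0.20, 0.10, 0.05):
    print(f"p = {p:.2f}: detection with n = 15 ~ {detection_probability(p, 15):.1%}")
```

A problem affecting 20% of users has roughly a 96% chance of appearing at least once with 15 participants, but that probability drops quickly for rarer problems, which is exactly why 15 per user group is a floor, not a target, and why the surrounding research design has to carry the rest of the load.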

Bottom line – be as intentional about the logistical planning for your research as you are about the development of research materials and planned analyses because these logistics can have just as much impact on the validity of your conclusions!

FDA’s digital health precertification program emphasizes importance of post-market surveillance



Finding efficient ways to leverage post-market surveillance data to inform product development at an institutional level will be key for SaMD manufacturers seeking FDA precertification.

Post-market surveillance is already an important component of human factors analyses that inform the development of any safe and effective medical device. For manufacturers of Software as a Medical Device (SaMD) that elect to participate in the FDA’s Digital Health Software Precertification Program, post-market surveillance data will become even more important. SaMD in many cases presents opportunities for a manufacturer to monitor Real World Performance Data (RWPD) directly via data recorded by the SaMD itself, but analysis of the events catalogued in publicly available adverse event databases remains a key source of post-market surveillance. The FDA’s MAUDE database can be complicated and time-consuming to mine. To make the most of this data, manufacturers would benefit from having a more efficient means by which to extract meaningful insights.


By implementing the pre-certification program, the CDRH at FDA hopes to establish a regulatory model that will “assess the safety and effectiveness of software technologies without inhibiting patient access to these technologies.” Under the FDA’s program, manufacturers of SaMD who successfully obtain precertification can benefit from streamlined premarket review, and, in cases of some lower-risk software or certain types of modifications, can bypass FDA review altogether.

In order to allow for streamlined (or sometimes bypassed) premarket review, but still achieve its mandate to ensure the safety and efficacy of medical devices, the FDA must verify through the pre-certification program that a manufacturer has established a “robust culture of quality and organizational excellence” and is “committed to monitoring real-world performance of their products once they reach the U.S. market.” In their working model of the pre-cert program, the FDA outlines the key components to the program (Figure below taken from Developing a Software Precertification Program: A Working Model).


From a human factors perspective (and likely from other perspectives), the details of the excellence appraisal and certification are critical to the value and the success of the program itself. The FDA is working with the pilot participants in the program to define those details, which will be centered around five key principles:

  1. Product Quality
  2. Patient Safety
  3. Clinical Responsibility
  4. Cybersecurity Responsibility
  5. Proactive Culture

One can hope that sound human factors practices (e.g., documented institutionalization of getting feedback from end users early and often, and grounding design decisions in this feedback) are threaded throughout the assessment of an organization against these principles.

Post-market surveillance data is key

Real World Performance Data (RWPD, i.e., post-market surveillance data), and the extent to which regular monitoring of this data is relied upon to inform continuous improvement, is an important component of the pre-certification program. With a streamlined (or bypassed) regulatory review, implementing a robust process by which RWPD is constantly monitored, analyzed, and acted upon will be critical to manufacturers’ demonstration to the FDA that their organization is committed to producing SaMD that is both safe and effective.

RWPD includes three key components in and of itself:

  1. Real World Health Data (RWHD)
  2. User Experience Data (UXD)
  3. Product Performance Data (PPD)

In many cases, software by its very nature presents opportunities for monitoring RWPD that require minimal effort to at least obtain the data on the part of the manufacturer. For example, error reports that are automatically sent to a manufacturer from a software program can be monitored for reliability analysis. In other cases, where data are (at least for now) generated externally, such as is the case with adverse event and product complaint reporting, more effort is required to obtain RWPD.

The data are made publicly available in one way or another – whether formally via databases such as the FDA MAUDE or FAERS databases, or informally via other channels such as social media. In both cases, independent of the effort required to obtain the data, a manufacturer of SaMD seeking precertification through this program must have a robust process in place to collect, analyze, and act on this data; any medical device manufacturer, for that matter, should also have this process in place. As is increasingly the case in the information age, the problem becomes not how to obtain data, but how to manage the incredible amount of data available.

Making post-market surveillance data easier to consume

While there are multiple sources for post-market surveillance data, the MAUDE database remains the most relevant “official” database of publicly available AE/PC data for SaMD manufacturers. Unfortunately, the MAUDE database is not very accessible and requires a database engineer to plumb the depths of the data before any meaningful analysis can be conducted.

That said, with the right skills brought to bear, there is a wealth of information available in MAUDE for SaMD manufacturers to leverage in their efforts to ground their designs in RWPD. This seems to be an opportunity for a subscription service that regularly summarizes trends in AE/PC data from the MAUDE database and other sources. The MAUDE database in particular is not without its limitations, several of which are stated very clearly on the FDA website itself, but as long as it is the official database of AE/PC data for medical devices it would seem beneficial to have easy access to emerging and historical trends in this data. This way, medical device manufacturers in general could be better able to institutionalize the incorporation of robust post-market surveillance data analysis into their design process, and SaMD manufacturers in particular could better prepare themselves for precertification and streamlined regulatory approval process in doing so.
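As a sketch of what “extracting meaningful insights” might look like once records are in hand, the snippet below tallies event types across a handful of hypothetical MAUDE-style records. The field names and values are illustrative only and do not reflect the actual MAUDE schema:

```python
from collections import Counter

# Hypothetical, simplified records; real MAUDE exports have a far richer schema.
records = [
    {"device": "InfusionPump-X", "event_type": "Malfunction", "year": 2018},
    {"device": "InfusionPump-X", "event_type": "Injury", "year": 2018},
    {"device": "InfusionPump-X", "event_type": "Malfunction", "year": 2019},
    {"device": "GlucoseApp-Y", "event_type": "Malfunction", "year": 2019},
]

def summarize(records, device):
    """Count event types for one device -- a first step toward trend analysis."""
    return Counter(r["event_type"] for r in records if r["device"] == device)

print(summarize(records, "InfusionPump-X"))
```

Even a simple roll-up like this, run regularly against fresh data, is the kind of repeatable post-market analysis process the program asks manufacturers to demonstrate; the hard part is the extraction and normalization upstream of it.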


Related post: The time for a better UX in digital therapeutics is now

5 takeaways for human factors practitioners from the HFES Health Care Symposium FDA workshops

Pre-Symposium U.S. Food and Drug Administration (FDA) workshops have become the norm over the past few years at the annual International Symposium on Human Factors and Ergonomics in Health Care. This year, both Center for Drug Evaluation and Research (CDER) and Center for Devices and Radiological Health (CDRH) were represented.

As in previous workshops, a significant portion of the content presented was a summary of various FDA guidance documents related to the application of human factors engineering to the development of medical and drug-delivery devices. While the bulk of that information is publicly available, it is valuable to take advantage of the forum and get some clarification and elaboration directly from the agency on current human factors (HF)-related topics.

As we have come to expect, there are some similarities in how CDER and CDRH expect human factors to be applied, but also some differences. This summary is simply my interpretation of key topics discussed, and is in no way meant to be comprehensive, nor an endorsed statement of FDA policy. While many of these points were couched in the context of designing, executing, and analyzing a human factors validation study, the implications apply more broadly.

Takeaway 1: User groups



When validating safe and effective use of a pre-filled syringe, relying on existing data for HCPs rather than including them alongside (e.g., adult patients) in the human factors validation test may be a viable option.

On both sides of the fence (CDER/CDRH), the importance of validating safe and effective use for all intended user groups was emphasized, as it has been many times before. CDRH cited failure to account for all intended user groups (and/or use environments) as one of the most common defects in human factors validation reports. CDER did not disagree with this common pitfall but did highlight one use case specific to combination products that may not require inclusion in human factors validation tests. For “standard” pre-filled syringes, health care practitioners (HCPs) (e.g., nurses) need to be acknowledged as a user group, but their inclusion in the actual HF validation test is likely not necessary given the well-documented practices and post-market data supporting this group’s safe and effective use of such products.

The point was made that of course not all pre-filled syringes are “standard” and that there are exceptions… but my takeaway was that when validating safe and effective use of a pre-filled syringe, relying on existing data for HCPs rather than including them alongside (e.g., adult patients) in the human factors validation test may be a viable option.

Takeaway 2: Critical tasks



While documenting all your human factors-related activities during product development is a must, for medical devices, those activities may not necessarily include a human factors validation study.

There exists a documented difference between the CDER and CDRH definitions of a critical task.

From CDER Draft Guidance:

Critical tasks are user tasks that, if performed incorrectly or not performed at all, would or could cause harm to the patient or user, where harm is defined to include compromised medical care.

From CDRH Guidance:

[A critical task is] A user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.

This is not a new difference, but it was discussed during the workshops, and CDRH emphasized that they were “serious about the word serious.” More interesting than the rehashing of this difference was a related discussion about how a manufacturer should handle validation of safe and effective use of their product if their use-related risk analysis (URRA) determines that no critical tasks exist for their product. There are several rabbit holes one can go down when considering this question, but one interesting difference between the CDER and CDRH responses did emerge.

From CDER’s perspective, even if there is no immediate harm associated with (e.g.) a failed injection, the fact that failed injections may occur is still important, so “no critical tasks exist” is not an argument a manufacturer can make in support of a decision not to conduct a human factors validation study.

From the CDRH perspective, if you complete all of your preliminary analyses and determine that there are no critical tasks associated with use of your device, you still have to complete Sections 1-7 of your human factors engineering report, describing all the activities leading up to that determination. If the agency agrees with your determination, you would not be required to conduct a human factors validation study.

Regardless of whether your product is a drug/combination product or a medical device, documenting all your human factors-related activities during product development is a must. However, for medical devices, those activities may not necessarily include a human factors validation study.

Takeaway 3: Training and support in simulated environments



There is an increasing willingness on the part of the agency to consider realistic simulations of training and support mechanisms available to medical device and combination product users.
The point was still made that training all participants in a validation study, particularly when evaluating a product to be used by laypersons, is very often not representative of operational context of use, and therefore not acceptable. However, a few specific examples were given that indicate the FDA’s commitment to consider reasonable arguments for providing more realistic (first-time) use simulations:

Help line

Historically, incorporating the opportunity for a research participant in a validation study to call a (simulated) help-line to get the support they need to complete a task has been a contentious practice. In my experience, this is mostly because it has been implemented with varying degrees of care. Simulating a help line by having the “support representative” role played by an observer directly behind the glass is not realistic, and likely to be met with justifiable criticism. However, the FDA did grant that when simulated properly, this can be a reasonable simulation of operational context of use. In these cases, a participant in a validation study independently choosing to call the help line and receiving the support they need to complete a task successfully does not necessarily constitute a critical task failure (though it likely constitutes a difficulty that would require further analysis).

Online support

It was encouraging to hear the agency acknowledge that online resources are increasingly becoming the first point of reference when users of a product (medical device or otherwise) need a tutorial. Whether a product website, online manual, or YouTube demo – these resources are very much a part of operational context of use for a significant portion of the population. As with the help line, the way in which these resources are incorporated into the simulated use environment is key to the validity of the resulting data. “Here’s an injection device, go watch this YouTube video and then attempt a simulated injection…” is not realistic, and likely to be met with justifiable criticism. But implementing protocols to understand how a specific research participant tends to learn about using a new injection device, and making them aware that all those resources are available to them as part of the research they are participating in, can be a reasonable approach. It was good to hear the FDA acknowledge that a case can be made for this.

Train the trainer

Consistent with the underlying message in the previous two examples, when discussing the extent to which it is necessary to implement “train the trainer” protocols in simulated-use research, the FDA stressed the importance of achieving a reasonable simulation of operational context of use. For some types of devices, this question becomes less important than the question of whether any degree of consistent training can even be expected, in which case the data resulting from a trained arm of a study may be all but disregarded in favor of an assessment of untrained use. But for other types of devices, use without some degree of training is simply not realistic. In these cases, how the training is implemented becomes more important. If, for example, the expected practice is that groups of clinicians receive one in-service training from a manufacturer representative and then use the device themselves in a clinical setting and/or train patients on how to use the device themselves, then this is the sequence of events that should be simulated for validation of safe and effective use.

Takeaway 4: Communication with the agency



Of every 100 submissions that the CDRH human factors team receives to review, where they were NOT provided an opportunity to review the HF validation protocol in advance, maybe one of them makes it through without a request for additional information.
A common message from CDER/CDRH in just about every public forum I have heard them speak – communicate with us early and often. This can be frustrating for manufacturers to hear, because for some types of interactions the response time from the agency is not as quick as they would like, but not every interaction has to be a full-fledged meeting and review. There are guidance documents available describing the various mechanisms by which the agency can be engaged (e.g., Guidance Document UCM311176). Given that some of the most important questions may only be answered in a more formal meeting, this stresses the importance of having a robust human factors plan in place long before you’re gearing up for your validation study.

CDRH cited a high degree of communication with the agency as a key best practice, and backed this up with an observation that of every 100 submissions that the human factors team receives to review where they were NOT provided an opportunity to review the HF validation protocol in advance, maybe one of them makes it through without a request for additional information (e.g., another validation study).

Takeaway 5: Digital Health Software Precertification Program



The Digital Health Software Precertification Program represents an opportunity to incentivize not just the proper execution of a human factors validation study, but the advancement of safer and more effective medical devices (including software) through institutionalization of best practices in human factors engineering.
As a follow-up to our previous piece on digital therapeutics, we started exploring the extent to which the Digital Health Software Precertification Program might in the future impact the regulation of medical and drug-delivery devices, or at least the software-based components of those devices. This is a very large and multi-faceted initiative coming out of CDRH that seeks to “develop a tailored approach toward regulating digital health and software technologies. The new approach aims to look first at the software developer and/or digital health technology developer, rather than primarily at the product, which is what we currently do for traditional medical devices.”

The same, or at least a very similar, topic has come up more and more frequently in recent years when considering how to assess and regulate the application of human factors engineering to the development of software as a medical device (or at least the software components of medical and drug-delivery devices). Given this, I expected to hear at least a mention of collaboration with the CDRH human factors team when listening in on the FDA-hosted public workshop focused on this program earlier this year. I was surprised to hear no such mention, and then thought maybe during the HFES workshops the CDRH human factors team would be able to share some insight into how human factors is being considered in the institutional precertification of developers and manufacturers. Unfortunately, the CDRH team was not able to share any insights into this when asked during the workshop (nor was CDER).

I hope that the Digital Health Software Precertification Program leadership has actively engaged their in-house human factors experts, and that the human factors team’s inability to share insights was simply due to some limitation in their ability to speak publicly on the topic. If that is not the case, it would seem a missed opportunity to incentivize not just the proper execution of a human factors validation study, but the advancement of safer and more effective medical devices (including software) through institutionalization of best practices in human factors engineering.

Did you attend the Pre-Symposium FDA workshops? What additional takeaways do you have? Comment below!

In-vehicle UX research: Here’s one recommendation that hasn’t changed in 10 years



I found myself discussing with the research sponsors what can be done to increase the extent to which voice recognition systems are seen as a benefit rather than an annoyance, and I said the same things as I said 10 years ago… improve the system to support and recognize more natural speech patterns.

After recently conducting research on drivers’ experience with technology in their vehicles, I reflected on what has changed about that experience in the last 10 years… and what hasn’t changed. The availability of customized, on-demand, and context-appropriate information in in-vehicle displays seems to be one of the biggest differences that drivers now benefit from, compared to their experiences 10 years ago. However, in-vehicle voice recognition was a particularly interesting topic during the research. In some cases, these systems have progressed considerably in the past decade such that they are useful now. But in many cases, I honestly couldn’t tell the difference between current voice interfaces and those of the late 2000s.

What’s impacting drivers now

There are a lot of cool things on the horizon for the auto industry with autonomous vehicles gaining momentum, but I’m talking specifically about features equipped in late model mass market vehicles that are having a pretty big impact on a large portion of the population’s driving experience right now. The integration of customizable and context-sensitive safety, diagnostic and convenience information into instrument clusters, and even the heads-up display (HUD) has changed the typical driver experience.

In an affordable $20,000-$30,000 vehicle, drivers can get a technology and communications package that puts a wealth of information at their fingertips: diagnostic and safety sensors that provide alerts whenever vehicle status is sub-optimal, lane departure and driver attention (or rather, lack thereof) warnings, numerous cameras and sensors on the exterior of the vehicle to assist with maneuvering in tight quarters (or just park the car for you), haptic indicators that orient attention to directional alerts, and of course, the ability to control connected media devices without ever taking your hands off the wheel. These are just a few of the things that might not be exactly revolutionary anymore, but are currently changing how mainstream drivers experience their vehicles.

HUDs seem to be on the border – they are not new in automotive application but have not yet achieved the mainstream deployment that you see for other technologies. HUDs can be valuable even if they are only capable of showing a driver their speed and the current speed limit for the road. When turn-by-turn navigational cues, safety alerts, and customizability are added in, the HUD feature starts to be more than just a ‘nice to have’.

A “real” voice system

With respect to voice recognition – these systems still leave much to be desired. I was a bit surprised to recently find myself having similar conversations with research participants about their experience using voice recognition systems as I had with research participants 10 years ago. There are some that HAVE gotten significantly better and allow for fairly natural speech patterns to be used when interacting with the system. But mostly, the systems still require clunky one- or two-word commands to precede any sort of action other than making a phone call.

Interestingly, drivers now have Amazon Echo, Google Home, and even Siri to use as a benchmark. They are even less willing to accept a system that doesn’t allow them to converse at least as naturally as they do when they use those systems. Ask drivers to compare their in-vehicle voice recognition systems to their experience with voice assistants and prepare for them to go on a diatribe about how cool it would be to have a “real” voice system in their vehicle to make commanding that system while driving easier.

Eventually, it will all come together. Personally, I am waiting for the day when I can say “Alexa, take me home.”
