5 takeaways for human factors practitioners from the HFES Health Care Symposium FDA workshops

April 5, 2018
Pre-Symposium U.S. Food and Drug Administration (FDA) workshops have become the norm over the past few years at the annual International Symposium on Human Factors and Ergonomics in Health Care. This year, both the Center for Drug Evaluation and Research (CDER) and the Center for Devices and Radiological Health (CDRH) were represented.

As in previous workshops, a significant portion of the content presented was a summary of various FDA guidance documents related to the application of human factors engineering to the development of medical and drug-delivery devices. While the bulk of that information is publicly available, it is valuable to take advantage of the forum and get some clarification and elaboration directly from the agency on current human factors (HF)-related topics.

As we have come to expect, there are some similarities in how CDER and CDRH expect human factors to be applied, but also some differences. This summary is simply my interpretation of key topics discussed, and is in no way meant to be comprehensive, nor an endorsed statement of FDA policy. While many of these points were couched in the context of designing, executing, and analyzing a human factors validation study, the implications apply more broadly.

Takeaway 1: User groups

When validating safe and effective use of a pre-filled syringe, relying on existing data for health care practitioners (HCPs) rather than including them alongside other user groups (e.g., adult patients) in the human factors validation test may be a viable option.

On both sides of the fence (CDER/CDRH), the importance of validating safe and effective use for all intended user groups was emphasized, as it has been many times before. CDRH cited failure to account for all intended user groups (and/or use environments) as one of the most common defects in human factors validation reports. CDER did not disagree with this common pitfall but did highlight one case, specific to combination products, in which a user group may not require inclusion in human factors validation testing. For “standard” pre-filled syringes, health care practitioners (HCPs), such as nurses, need to be acknowledged as a user group, but their inclusion in the actual HF validation test is likely not necessary given the well-documented practices and post-market data supporting this group’s safe and effective use of such products.

The point was made that of course not all pre-filled syringes are “standard” and that there are exceptions… but my takeaway was that when validating safe and effective use of a pre-filled syringe, relying on existing data for HCPs rather than including them alongside other user groups (e.g., adult patients) in the human factors validation test may be a viable option.

Takeaway 2: Critical tasks

While documenting all your human factors-related activities during product development is a must, for medical devices, those activities may not necessarily include a human factors validation study.

There is a documented difference between the CDER and CDRH definitions of a critical task.

From CDER Draft Guidance:

Critical tasks are user tasks that, if performed incorrectly or not performed at all, would or could cause harm to the patient or user, where harm is defined to include compromised medical care.

From CDRH Guidance:

[A critical task is] A user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.

This is not a new difference, but it was discussed during the workshops, and CDRH emphasized that they were “serious about the word serious.” More interesting than the rehashing of this difference was a related discussion about how a manufacturer should handle validation of safe and effective use of their product if their use-related risk analysis (URRA) determines that no critical tasks exist for their product. There are several rabbit holes one can go down when considering this question, but one interesting difference between the CDER and CDRH responses did emerge.

From CDER’s perspective, even if there is no immediate harm associated with, for example, a failed injection, the fact that failed injections may occur is still important. As a result, “no critical tasks exist” is not an argument a manufacturer can make in support of a decision not to conduct a human factors validation study.

From the CDRH perspective, if you complete all of your preliminary analyses and determine that there are no critical tasks associated with use of your device, you still have to complete Sections 1-7 of your human factors engineering report, describing all the activities leading up to that determination. If the agency agrees with your determination, you would not be required to conduct a human factors validation study.

Regardless of whether your product is a drug/combination product or a medical device, documenting all your human factors-related activities during product development is a must. However, for medical devices, those activities may not necessarily include a human factors validation study.

Takeaway 3: Training and support in simulated environments

There is an increasing willingness on the part of the agency to consider realistic simulations of training and support mechanisms available to medical device and combination product users.

The point was still made that training all participants in a validation study, particularly when evaluating a product to be used by laypersons, is very often not representative of the operational context of use, and is therefore not acceptable. However, a few specific examples were given that indicate the FDA’s commitment to consider reasonable arguments for providing more realistic (first-time) use simulations:

Help line

Historically, incorporating the opportunity for a research participant in a validation study to call a (simulated) help line to get the support they need to complete a task has been a contentious practice. In my experience, this is mostly because it has been implemented with varying degrees of care. Simulating a help line by having the “support representative” role played by an observer directly behind the glass is not realistic, and likely to be met with justifiable criticism. However, the FDA did grant that, when simulated properly, this can be a reasonable simulation of the operational context of use. In these cases, a participant in a validation study independently choosing to call the help line and receiving the support they need to complete a task successfully does not necessarily constitute a critical task failure (though it likely constitutes a difficulty that would require further analysis).

Online support

It was encouraging to hear the agency acknowledge that online resources are increasingly becoming the first point of reference when users of a product (medical device or otherwise) need a tutorial. Whether a product website, an online manual, or a YouTube demo, these resources are very much a part of the operational context of use for a significant portion of the population. As with the help line, the way in which these resources are incorporated into the simulated use environment is key to the validity of the resulting data. “Here’s an injection device, go watch this YouTube video and then attempt a simulated injection…” is not realistic, and likely to be met with justifiable criticism. But implementing protocols to understand how a specific research participant tends to learn about a new injection device, and then making them aware that those same resources are available as part of the research they are participating in, can be a reasonable approach. It was good to hear the FDA acknowledge that a case can be made for this.

Train the trainer

Consistent with the underlying message in the previous two examples, when discussing the extent to which it is necessary to implement “train the trainer” protocols in simulated-use research, the FDA stressed the importance of achieving a reasonable simulation of the operational context of use. For some types of devices, this question matters less than whether any degree of consistent training can even be expected; in such cases, the data from a trained arm of a study may be all but disregarded in favor of an assessment of untrained use. For other types of devices, use without some degree of training is simply not realistic, and how the training is implemented becomes more important. If, for example, the expected practice is that groups of clinicians receive one in-service training from a manufacturer representative and then use the device themselves in a clinical setting and/or train patients on how to use the device themselves, then this is the sequence of events that should be simulated for validation of safe and effective use.

Takeaway 4: Communication with the agency

Of every 100 submissions the CDRH human factors team receives where they were NOT provided an opportunity to review the HF validation protocol in advance, maybe one makes it through without a request for additional information.

A common message from CDER/CDRH in just about every public forum I have heard them speak: communicate with us early and often. This can be frustrating for manufacturers to hear because, for some types of interactions, the response time from the agency is not as quick as they would like. But not every interaction has to be a full-fledged meeting and review; there are guidance documents describing the various mechanisms by which the agency can be engaged (e.g., Guidance Document UCM311176). And given that some of the most important questions may only be answered in a more formal meeting, it is important to have a robust human factors plan in place long before you’re gearing up for your validation study.

CDRH cited a high degree of communication with the agency as a key best practice, and backed this up with an observation: of every 100 submissions the human factors team receives where they were NOT provided an opportunity to review the HF validation protocol in advance, maybe one makes it through without a request for additional information (e.g., another validation study).

Takeaway 5: Digital Health Software Precertification Program

The Digital Health Software Precertification Program represents an opportunity to incentivize not just the proper execution of a human factors validation study, but the advancement of safer and more effective medical devices (including software) through institutionalization of best practices in human factors engineering.

As a follow-up to our previous piece on digital therapeutics, we started exploring the extent to which the Digital Health Software Precertification Program might in the future impact the regulation of medical and drug-delivery devices, or at least the software-based components of those devices. This is a very large and multi-faceted initiative coming out of CDRH that seeks to “develop a tailored approach toward regulating digital health and software technologies. The new approach aims to look first at the software developer and/or digital health technology developer, rather than primarily at the product, which is what we currently do for traditional medical devices.”

The same, or at least a very similar, topic has come up more and more frequently in recent years when considering how to assess and regulate the application of human factors engineering to the development of software as a medical device (or at least the software components of medical and drug-delivery devices). Given this, I expected to hear at least a mention of collaboration with the CDRH human factors team when listening in on the FDA-hosted public workshop focused on this program earlier this year. I was surprised to hear no such mention, and then thought that perhaps during the HFES workshops the CDRH human factors team would be able to share some insight into how human factors is being considered in the institutional precertification of developers and manufacturers. Unfortunately, the CDRH team was not able to share any insights into this when asked during the workshop (nor was CDER).

I hope that the Digital Health Software Precertification Program leadership has actively engaged their in-house human factors experts, and that the human factors team’s inability to share insights was simply due to some limitation in their ability to speak publicly on the topic. If that is not the case, it would seem a missed opportunity to incentivize not just the proper execution of a human factors validation study, but the advancement of safer and more effective medical devices (including software) through the institutionalization of best practices in human factors engineering.

Did you attend the Pre-Symposium FDA workshops? What additional takeaways do you have? Let’s talk!