Takeaway 1: User groups
When validating safe and effective use of a pre-filled syringe, relying on existing data for healthcare professionals (HCPs), rather than including them alongside other user groups (e.g., adult patients) in the human factors validation test, may be a viable option.
Takeaway 2: Critical tasks
While documenting all your human factors-related activities during product development is a must, for medical devices, those activities may not necessarily include a human factors validation study.
From CDER Guidance: [Critical tasks are] User tasks that, if performed incorrectly or not performed at all, would or could cause harm to the patient or user, where harm is defined to include compromised medical care.
From CDRH Guidance: [A critical task is] A user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.
This is not a new difference, but it was discussed during the workshops, and CDRH emphasized that they were “serious about the word serious.” More interesting than the rehashing of this difference was a related discussion about how a manufacturer should handle validation of safe and effective use of their product if their use-related risk analysis (URRA) determines that no critical tasks exist for the product. There are several rabbit holes one can go down when considering this question, but one interesting difference between the CDER and CDRH responses did emerge.

From CDER’s perspective, even if no immediate harm is associated with (e.g.) a failed injection, the fact that failed injections may occur is still important, so “no critical tasks exist” is not an argument a manufacturer can make in support of a decision not to conduct a human factors validation study.

From the CDRH perspective, if you complete all of your preliminary analyses and determine that there are no critical tasks associated with use of your device, you still have to complete Sections 1-7 of your human factors engineering report, describing all the activities leading up to that determination. If the agency agrees with your determination, you would not be required to conduct a human factors validation study.

Regardless of whether your product is a drug/combination product or a medical device, documenting all your human factors-related activities during product development is a must. However, for medical devices, those activities may not necessarily include a human factors validation study.
Takeaway 3: Training and support in simulated environments
There is an increasing willingness on the part of the agency to consider realistic simulations of training and support mechanisms available to medical device and combination product users.
The point was still made that training all participants in a validation study, particularly when evaluating a product to be used by laypersons, is very often not representative of the operational context of use, and therefore not acceptable. However, a few specific examples were given that indicate the FDA’s willingness to consider reasonable arguments for providing more realistic (first-time) use simulations:
Help line
Historically, incorporating the opportunity for a research participant in a validation study to call a (simulated) help line to get the support they need to complete a task has been a contentious practice. In my experience, this is mostly because it has been implemented with varying degrees of care. Simulating a help line by having the “support representative” role played by an observer directly behind the glass is not realistic, and likely to be met with justifiable criticism. However, the FDA did grant that, when simulated properly, this can be a reasonable simulation of the operational context of use. In these cases, a participant in a validation study independently choosing to call the help line and receiving the support they need to complete a task successfully does not necessarily constitute a critical task failure (though it likely constitutes a difficulty that would require further analysis).
Online support
It was encouraging to hear the agency acknowledge that online resources are increasingly becoming the first point of reference when users of a product (medical device or otherwise) need a tutorial. Whether a product website, online manual, or YouTube demo, these resources are very much a part of the operational context of use for a significant portion of the population. As with the help line, the way in which these resources are incorporated into the simulated use environment is key to the validity of the resulting data. “Here’s an injection device, go watch this YouTube video and then attempt a simulated injection…” is not realistic, and likely to be met with justifiable criticism. But implementing protocols to understand how a specific research participant tends to learn about using a new injection device, and making them aware that all of those resources are available to them as part of the research they are participating in, can be a reasonable approach. It was good to hear the FDA acknowledge that a case can be made for this.
Train the trainer
Consistent with the underlying message in the previous two examples, when discussing the extent to which it is necessary to implement “train the trainer” protocols in simulated-use research, the FDA stressed the importance of achieving a reasonable simulation of operational context of use. For some types of devices, this question becomes less important than the question of whether any degree of consistent training can even be expected, in which case the data resulting from a trained arm of a study may be all but disregarded in favor of an assessment of untrained use. But for other types of devices, use without some degree of training is simply not realistic. In these cases, how the training is implemented becomes more important. If, for example, the expected practice is that groups of clinicians receive one in-service training from a manufacturer representative and then use the device themselves in a clinical setting and/or train patients on how to use the device themselves, then this is the sequence of events that should be simulated for validation of safe and effective use.
Takeaway 4: Communication with the agency
Of every 100 submissions that the CDRH human factors team receives for review without having been given an opportunity to review the HF validation protocol in advance, perhaps one makes it through without a request for additional information.
Takeaway 5: Digital Health Software Precertification Program
The Digital Health Software Precertification Program represents an opportunity to incentivize not just the proper execution of a human factors validation study, but the advancement of safer and more effective medical devices (including software) through institutionalization of best practices in human factors engineering.