As part of the overall human factors effort associated with the development of a medical device, critical tasks¹ must be determined and evaluated. To support testing of critical tasks related to a device, it is necessary to have a clear understanding of the hazards associated with use of the device and the level of risk associated with those hazards. We have seen multiple device manufacturers rely on a general Failure Mode and Effects Analysis (FMEA) to provide information about the hazards associated with a device and the level of risk associated with those hazards. From the FMEA, manufacturers then attempt to ‘back into’ user behaviors and determine which behaviors might cause the various hazards that were identified in the FMEA.
How the pieces fit together is important, but not everything
FMEA and other risk assessment methodologies typically focus on the functionality of product components. FMEA, for example, involves systematic review of a device, bolt by bolt, to evaluate ways the product may fail (i.e., failure modes) and what may happen if the product fails that way (i.e., effects). This focus on device components does not account for user behavior or how users interact with the device. While use of a device may result in device-component failures, it may also result in harm due to unintended, uninformed, or improper use. As a result, methods that do not evaluate users’ interactions with the device are typically insufficient for supporting a human factors analysis.
In my experience, several common challenges result from component-focused risk analyses:
- Difficulty identifying risks that are based solely on use;
- Difficulty assessing the level of severity, and thus risk, associated with tasks;
- Difficulty defining and coding use-related critical tasks; all of which lead to
- Difficulty reporting on critical tasks.
More focus on behavior is needed
There are methods of risk analysis that focus specifically on behavior and on how people use products, loosely referred to as use-related risk analysis (URRA). A URRA identifies potential use-related activities a user may engage in that could result in risk to the user, risk to others, or risk of damage to the product. A good URRA also considers known problems with similar products on the market. However, before a URRA can be written, a task analysis must be completed to identify the tasks and behaviors that are associated with device use (both intended use and reasonably foreseeable misuse).
Anecdotally, studies that have relied on a general FMEA rather than a URRA have been difficult to run, have experienced high numbers of unanticipated use errors, and were more likely to be converted from summative to formative.
There are multiple types of URRAs², including uFMEA (i.e., use failure mode and effects analysis). FMEA is still a great tool; the point is that a more specific focus on user behavior is needed. Ultimately, if your risk evaluation methodology includes 1) a task analysis which informs 2) a URRA, it will eliminate ambiguity related to defining and coding critical tasks. As a result, you are likely to experience fewer questions from FDA about how you determined and scored your critical tasks. Instead, the FDA can focus on the outcome of your testing.
1. Critical tasks, as defined by the FDA, are “user tasks which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.”
2. An example of a URRA is provided in Appendix B of the FDA / CDER draft guidance, “Contents of a Complete Submission for Threshold Analyses and Human Factors Submissions to Drug and Biologic Applications: Guidance for Industry and FDA Staff.”