Accurate: Clinical evaluation of medical device software

With the introduction of the MDR (Medical Device Regulation), the Medical Device Coordination Group (MDCG) regularly publishes guidance documents which, building on the MEDDEV documents used to date, provide recommendations for implementing the MDR requirements.

In March 2020, “MDCG 2020-1: Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software” was published. The goal of this guidance is to assist in determining and evaluating the adequacy of clinical data to demonstrate the safety and performance of medical device software. This applies both to the clinical evaluation of medical devices and to the performance evaluation of in vitro diagnostic devices.

The guidance document distinguishes three types of medical device software, with corresponding implications for the collection and determination of clinical data:

Software with an independent intended purpose and a claimed clinical benefit of its own → the clinical evaluation / performance evaluation then targets only this software
Software with an intended purpose and a claimed clinical benefit in relation to another medical device with a medical purpose → the clinical evaluation / performance evaluation then covers both the software and that medical device
Software that influences the application and use of another medical device (software without its own independent intended purpose and claimed clinical benefit) → the clinical evaluation / performance evaluation then targets the medical device including the software (the software being only a component or accessory)

The basis for determining and evaluating adequate clinical data in the context of the MDCG 2020-1 guidance document is Article 61 (1) of the MDR on the one hand and Article 56 (1) of the IVDR on the other.

As with clinical evaluations of medical devices and performance evaluations of in vitro diagnostics in general, the process for medical device software is an ongoing one that spans the entire life cycle of the software. For software qualified as a medical device or an in vitro diagnostic, MDCG 2020-1 refers to the same basic principles for preparing clinical evaluations or performance evaluations that are outlined and described in the corresponding guidelines and regulatory documents. These include, but are not limited to:

Establishing and maintaining a clinical evaluation / performance evaluation plan
Identifying all relevant data to demonstrate clinical safety and performance, and identifying and evaluating any remaining gaps in that evidence
Critically evaluating the data collected in terms of their quality and their contribution to the clinical evaluation / performance evaluation
Analyzing the available data and their relevance for demonstrating compliance with the general safety and performance requirements

When compiling clinical data/evidence for medical device software, attention should be paid to three key components:

Scientific validity: the extent to which the data output of the medical device software, based on its inputs and algorithms, is consistent with the intended clinical use situation, intended purpose and indication.
Evidence can be provided by comparing existing clinical performance data with the state of the art or, if a gap analysis has shown the need, by generating new clinical performance data.

Technical / analytical performance: the accuracy, reliability and precision of the medical device software in generating the desired data output from the input data.
Here, the evidence should consist of objectively measurable results that demonstrate compliance of the medical device software specifications with the intended purpose and with user needs and expectations.
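
What such objectively measurable results might look like can be illustrated with a minimal sketch: a verification script that compares the software output against a reference data set and reports accuracy and repeatability. The function analyze() and the reference values are purely hypothetical placeholders and are not taken from MDCG 2020-1.

```python
# Minimal sketch of an objective analytical-performance check (illustrative only).
# Assumes a hypothetical function `analyze()` that maps an input measurement to a
# numeric output; reference values would come from a validated verification data set.

from statistics import mean, pstdev

def analyze(measurement: float) -> float:
    # Placeholder for the device software's algorithm under test.
    return round(measurement * 0.98, 2)

# Reference inputs and expected outputs from the verification data set (fictitious).
reference = [(10.0, 9.8), (20.0, 19.6), (30.0, 29.4)]

# Accuracy: deviation of each output from its expected reference value.
errors = [abs(analyze(x) - expected) for x, expected in reference]
print(f"mean absolute error: {mean(errors):.3f}")

# Precision/repeatability: spread of repeated runs on the same input.
repeats = [analyze(10.0) for _ in range(5)]
print(f"repeatability (std dev over 5 runs): {pstdev(repeats):.3f}")
```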

Clinical performance: the ability of the medical device software to provide clinically relevant data output related to the intended purpose.
In this case, according to MDCG 2020-1, the manufacturer must demonstrate that the medical device software has been tested for the intended purpose, target populations, conditions of use, environments of use and with all user populations. This ultimately provides evidence that users can expect clinically relevant data outputs when using the software.
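
To give a rough idea of how such population-specific testing could be summarized, the following sketch computes sensitivity and specificity per target population from a small, fictitious set of results; the subgroup names and records are illustrative assumptions only.

```python
# Minimal sketch of summarizing clinical-performance results per target population
# (illustrative; the subgroup names and records are fictitious).

from collections import defaultdict

# Each record: (target population, software output, clinical reference outcome)
records = [
    ("adults",  True,  True),
    ("adults",  False, False),
    ("adults",  True,  False),
    ("elderly", True,  True),
    ("elderly", False, True),
    ("elderly", False, False),
]

by_group = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
for group, predicted, actual in records:
    key = ("tp" if actual else "fp") if predicted else ("fn" if actual else "tn")
    by_group[group][key] += 1

for group, c in by_group.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else float("nan")
    specificity = c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else float("nan")
    print(f"{group}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```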

The evaluation of clinical performance can be based on data from the medical device software itself as well as on data from equivalent software products. When presenting and justifying equivalence, essentially the same approach as in clinical evaluations of medical devices should be applied, i.e. agreement with regard to demonstrable (technical) characteristics as well as the intended purpose and clinical application scenarios.

MDCG 2020-1 also provides guidance on how to assess the scope and quality of the identified clinical evidence. Regarding scope, this includes considerations such as: do the clinical data provide evidence on the intended purpose, indications, contraindications and target populations, and have the clinical risks and the analytical or clinical performance been covered by the data? In qualitative terms, questions to be answered include: was the data collection in the source appropriate, and do the data reflect current scientific knowledge?

If it is concluded that the existing clinical data are insufficient to demonstrate safety and performance, the question of collecting clinical data in a clinical investigation inevitably arises. This may be the case for fundamentally modified medical device software or for completely new software products. In such cases, the conception and design of the clinical investigation/study may differ from that for classical medical devices. For example, for software products that measure the efficiency of a treatment method, a prospective study design to demonstrate the safety and performance parameters should be planned as part of the clinical evaluation / performance evaluation.

The level of evidence of the clinical data that must be provided to demonstrate safety and performance, as well as its adequacy, is determined by the characteristics and claims of the software, in accordance with MDCG 2020-1 in conjunction with Article 61 (1) of the MDR and Article 56 (1) of the IVDR. For medical device software for which demonstration of conformity with the general safety and performance requirements is not possible on the basis of clinical data, the manufacturer must present well-founded reasons in its technical documentation explaining why, in its opinion, conformity can be based on non-clinical data alone. This justification should be based on the conclusions of the risk analysis, which also take into account the state of the art as well as alternative diagnostic and therapeutic methods. The results, conclusions and the final benefit-risk assessment must nevertheless be presented in a clinical evaluation report.

This evaluation report must be updated at regular intervals. This includes data from post-market surveillance, i.e. feedback from the market and from users, complaint reports, and the results of any PMCF studies conducted after the product has been placed on the market. These requirements correspond to those for clinical evaluations of medical devices / performance evaluations of in vitro diagnostics; owing to its connectivity, however, medical device software offers a greater opportunity to collect so-called “real world performance” data and to evaluate it in the context of post-market surveillance. In this way, malfunctions or systematic misuse of the software, for example, can be recorded much more easily and transparently, which should also benefit the safety of the application of the medical device software.
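
As a purely illustrative sketch of what such “real world performance” data collection could look like on the software side, the snippet below serializes usage and malfunction events into structured records that can later be trended during post-market surveillance. The field names and event types are assumptions for illustration, not requirements from MDCG 2020-1.

```python
# Minimal sketch of collecting structured "real world performance" events for
# post-market surveillance (illustrative; field names are assumptions, not
# requirements from MDCG 2020-1).

import json
from datetime import datetime, timezone

def log_pms_event(event_type: str, detail: str, software_version: str) -> str:
    """Serialize a usage or malfunction event so it can be trended later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "software_version": software_version,
        "event_type": event_type,  # e.g. "malfunction", "unexpected_use"
        "detail": detail,
    }
    return json.dumps(event)

# Example: record a suspected malfunction for later trend analysis.
print(log_pms_event("malfunction", "output outside plausible range", "1.4.2"))
```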

In summary, three cornerstones of this guidance are of crucial importance, all of which focus on the generation of valid data; beyond that, the overriding characteristic of the guidance is its aim to provide a practical and appropriate approach.

Valid clinical association / scientific validity: proof that the data output of the medical device software, based on its inputs and algorithms, corresponds to the intended clinical use situation, intended purpose and indication. The proof can primarily be provided by comparing existing clinical performance data with the state of the art, substantiated, for example, by comparison with the guidelines of scientific medical societies.
Technical and analytical performance: fulfillment of the basic performance criteria and those intended by the manufacturer for the medical device software, demonstrated by comparing the input data with the output data in terms of the desired quality, reliability and precision.
Clinical performance: demonstrated by the ability of the medical device software to provide clinically relevant data output related to the intended purpose; the data output must demonstrably have a positive impact on the health of the user. Depending on the concept of the medical device software, positive impacts on public health, for example, are also conceivable.

The goal of the guidance is clear and already familiar to many manufacturers from clinical evaluation in general. To make sure your shot is on target when evaluating software as well, we are happy to support you in planning and preparing your clinical evaluation / performance evaluation for software. Whether in training, to keep up in your league or in the Champions League, seleon GmbH will be happy to assist you.

Please note that all details and listings do not claim to be complete, are without guarantee and are for information purposes only.