
AI in market access: silver bullet or spanner in the works?

By Emily Morton-West


You may be familiar with how artificial intelligence (AI) is revolutionizing drug discovery through identifying new targets and predicting the efficacy of different drug designs. Less attention has been given to the impact AI can have at the later stages of drug development and on patient access. This may in part be due to the scepticism surrounding AI, which is more pronounced when entrusting AI with patient data or decision-making that will directly affect patients.

Yet if the goal of market access is to ensure that patients receive timely and affordable access to healthcare products and services, how can we best serve patients if we don’t explore the possibilities of AI? This duality creates a unique tension in the industry, where the promise of AI’s efficiency is weighed against concerns about its reliability and accuracy.

We have identified six core pillars that represent areas with potential applications of AI in market access (see Figure 1). Here, we discuss the current capabilities of the AI tools available in each, separating the hype from the reality, and providing a balanced view of what AI can offer.


Figure 1. Potential applications of AI in market access

Clinical trial development

A common bugbear we hear from clients is that market access teams aren’t involved early enough in the design of clinical trials. It’s critical that endpoints are chosen with both regulatory and reimbursement bodies in mind, and currently the best way to gather feedback on trial design is through the early scientific advice (ESA) process, which must be initiated months in advance of protocol finalization. The lengthy ESA process can therefore be off-putting, especially when considering the number of different payers there are to satisfy, often with conflicting requirements.

Machine learning models (MLMs), trained on historical data, can make predictions about the acceptability of trial design to payers in minutes. This is based on identifying analogues, an approach that in the past has been completed manually. The same limitations will apply; analogues will never be a perfect match for the product, so there will always be caveats to the conclusions you can draw. Also, it’s worth noting that this will have limited use for Joint Clinical Assessment (JCA), an EU initiative to streamline the clinical assessment of health technologies, which will be introduced in phases from January 2025. For further information, read our blog post on opportunities for the use of AI in JCA here.
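
To make this concrete, the sketch below shows how such a prediction might be framed: a classifier trained on a hypothetical set of historical analogue submissions, each described by a few engineered features and a binary payer decision. The file name, feature names and model choice are illustrative assumptions, not a description of any specific tool.

```python
# Minimal sketch of an analogue-based acceptability predictor (all inputs hypothetical)
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical dataset: one row per past submission ("analogue")
history = pd.read_csv("historical_submissions.csv")
X = pd.get_dummies(history[["indication", "trial_phase", "endpoint_type", "comparator"]])
y = history["payer_accepted"]  # 1 = positive reimbursement decision

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Hold-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))

# A proposed trial design would be encoded with the same features (and the same
# dummy columns) before calling model.predict_proba on it.
```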

AI could also help to address patient recruitment issues by using clinical trial matching systems to pair eligible patients with trials or by refining trial design. AI tools can use electronic health records (EHRs) to simulate clinical trials with different inclusion criteria and predict the likely overall survival or response rate to determine which criteria are most important to patient population selection and remove those that restrict the population unnecessarily.
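
As a rough illustration of the second idea, the snippet below applies alternative inclusion criteria to an EHR-derived cohort and compares the resulting eligible population sizes and observed response rates; the column names and thresholds are invented for the example.

```python
# Screening alternative inclusion criteria against an EHR cohort (illustrative only)
import pandas as pd

cohort = pd.read_parquet("ehr_cohort.parquet")  # assumed EHR-derived extract

criteria_sets = {
    "strict":  (cohort["ecog"] <= 1) & (cohort["egfr"] >= 60) & (cohort["age"] < 75),
    "relaxed": (cohort["ecog"] <= 2) & (cohort["egfr"] >= 45),
}

for name, mask in criteria_sets.items():
    subset = cohort[mask]
    print(f"{name}: eligible={len(subset)}, "
          f"observed response rate={subset['responded'].mean():.2f}")
```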

Another key issue affecting clinical trials is the high rate of patient drop-out, especially in trials with high data-collection demands. AI tools can also be used to predict the likelihood of drop-out and to target patients with a high predicted risk with additional education and support, encouraging them to participate for longer. However, these tools raise ethical concerns, as they could equally be used to limit recruitment to patients with a low predicted likelihood of drop-out. This could reduce the diversity of trial populations and limit the applicability of trial results to the real world.

The COVID-19 pandemic exacerbated issues around missing study data, with a greater number of patients missing scheduled site visits. Considering the burden on patients during study design can ameliorate this issue and improve retention, but some missing data are inevitable. MLMs can impute missing data by making predictions about a participant's condition, restoring the true distribution of the data more accurately than traditional imputation methods, particularly for data sets with high levels of missingness. However, care must be taken to avoid amplifying statistical or social bias during imputation, which could exacerbate discrimination against certain groups or individuals in healthcare decision-making. Furthermore, the acceptability of such approaches to payers and regulators is currently unclear.
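
A minimal sketch of ML-based imputation is shown below, using scikit-learn's IterativeImputer with a random forest estimator; the visit-score columns are placeholders, and any real use would need the sensitivity analyses and regulatory discussions noted above.

```python
# ML-based imputation of missing longitudinal scores (column names are placeholders)
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the import below)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

visits = pd.read_csv("longitudinal_visits.csv")  # assumed wide-format visit data
score_cols = ["baseline_score", "week4_score", "week12_score", "week24_score"]

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10,
    random_state=0,
)
visits[score_cols] = imputer.fit_transform(visits[score_cols])

# Compare imputed and observed distributions, and report sensitivity analyses,
# before relying on any imputed endpoint.
```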

Evidence generation

One major obstacle to the growing use of real-world evidence is the need for manual curation of EHRs or other patient data sources. This process is time-consuming, particularly due to the varying quality of patient data across different sources. Natural-language processing (NLP) tools can extract EHR data with up to 96% accuracy for characteristics such as disease stage or histology, achieving similar error rates to manual extraction (1). However, NLP tools struggle with results that are reported visually and cannot extract graphical data. Additionally, given inter- and intra-country variations in healthcare systems, tools must be retrained on local data to achieve the highest accuracy levels.
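
For flavour, the toy example below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to pull disease stage and histology out of a free-text note; a production system would instead be trained and validated on local, labelled clinical text.

```python
# Toy NLP extraction from a free-text EHR note using zero-shot classification
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

note = "Patient presents with stage IV adenocarcinoma of the lung, EGFR mutation positive."
stages = ["stage I", "stage II", "stage III", "stage IV"]
histologies = ["adenocarcinoma", "squamous cell carcinoma", "small cell carcinoma"]

for label_set in (stages, histologies):
    result = classifier(note, candidate_labels=label_set)
    print(result["labels"][0], round(result["scores"][0], 2))  # top label and its score
```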

Another application is to generate synthetic patient data using an AI-based generative model trained on historical patient data. This has particularly exciting applications for rare diseases, where single-arm trials are common due to recruitment constraints. External control arms built from historical data are an imperfect solution; instead, AI tools can create a digital twin for each trial participant and predict their disease progression. Treatment effects can then be estimated by comparing each participant with their digital twin (2).
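
The comparison can be pictured as in the sketch below: a model trained on historical control patients predicts each trial participant's expected outcome without treatment (their digital twin), and the treatment effect is estimated as observed minus predicted. The data files, column names and choice of model are assumptions made for illustration, not the method used in the cited study.

```python
# Conceptual digital-twin comparison for a single-arm trial (all inputs hypothetical)
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

historical = pd.read_csv("historical_controls.csv")  # assumed untreated historical cohort
trial = pd.read_csv("single_arm_trial.csv")          # assumed treated trial participants
features = ["age", "baseline_severity", "biomarker_level"]

twin_model = GradientBoostingRegressor(random_state=0)
twin_model.fit(historical[features], historical["outcome_at_12m"])

trial["twin_outcome"] = twin_model.predict(trial[features])  # predicted untreated outcome
trial["estimated_effect"] = trial["observed_outcome"] - trial["twin_outcome"]
print("Mean estimated treatment effect:", round(trial["estimated_effect"].mean(), 2))
```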

Data privacy remains a challenge: patient data must be anonymized before being fed into an AI tool, yet several reidentification attempts have been successful. Anonymization also degrades the data, so a balance must be struck between protecting patient privacy and preserving the utility of the data.

Evidence synthesis

Evidence synthesis has been a target for AI automation due to the repetitive nature of many of the steps involved. NLP-enhanced decision support systems are already used to highlight key terms for abstract screening or to classify articles by study design, such as Cochrane's validated tool for identifying randomized controlled trials. Further automation approaches that use supervised machine learning (ML) to infer inclusion and exclusion rules have greater application for targeted literature reviews than systematic literature reviews, as even those with the greatest accuracy result in a 5% reduction in recall compared with manual screening (3). A halfway approach uses one manual screener alongside a bidirectional encoder representations from transformers (BERT) algorithm as a second reviewer, with a human adjudicator resolving any conflicts.
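
The conflict-resolution step of that halfway approach might look something like the sketch below; the model name is a placeholder for a BERT classifier fine-tuned on your own include/exclude decisions, not a published checkpoint.

```python
# Human screener + BERT second reviewer, with conflicts routed to an adjudicator
from transformers import pipeline

# Placeholder identifier for a fine-tuned include/exclude classifier
screener = pipeline("text-classification", model="my-org/abstract-screening-bert")

abstracts = {
    "PMID001": ("Randomised, double-blind trial of drug X versus placebo ...", "include"),
    "PMID002": ("Narrative review of the current treatment landscape ...", "exclude"),
}

for pmid, (text, human_decision) in abstracts.items():
    machine_decision = screener(text[:512])[0]["label"].lower()  # assumed labels: include/exclude
    if machine_decision != human_decision:
        print(f"{pmid}: conflict (human={human_decision}, model={machine_decision}) -> adjudicate")
```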

Experiments using generative pre-trained transformer (GPT) tools to create search strategies have produced seemingly plausible results, but these can contain fabricated controlled vocabulary that is difficult for the inexpert eye to identify. A better approach may be to develop the initial search strategy manually and then use an automatic query expansion algorithm to refine it. This may be through synonym expansion, replacing contractions or abbreviations with the expanded terms more likely to be used in published work, or through identification of additional Medical Subject Headings (MeSH) terms.
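
A stripped-down version of such an expansion step is sketched below; the synonym and MeSH mappings are hand-written stand-ins for a proper thesaurus or MeSH lookup.

```python
# Toy automatic query expansion with synonyms and candidate MeSH terms
synonyms = {
    "MI": ["myocardial infarction", "heart attack"],
    "HF": ["heart failure", "cardiac failure"],
}
mesh_terms = {
    "myocardial infarction": '"Myocardial Infarction"[MeSH]',
    "heart failure": '"Heart Failure"[MeSH]',
}

def expand(query: str) -> str:
    blocks = []
    for token in query.split(" AND "):
        variants = [token] + synonyms.get(token, [])
        variants += [mesh_terms[v] for v in variants if v in mesh_terms]
        quoted = [v if v.endswith("[MeSH]") else f'"{v}"' for v in variants]
        blocks.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(blocks)

print(expand("MI AND statin"))
# ("MI" OR "myocardial infarction" OR "heart attack" OR "Myocardial Infarction"[MeSH]) AND ("statin")
```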

Despite the relatively high accuracy achieved by NLP tools in the extraction of EHR data mentioned above, the reported accuracy of such tools in data extraction from literature sources ranges widely, from 41.2% to 100.0% (4). This reflects the complexity of many prompts common to literature reviews, such as inclusion criteria, odds ratios or 95% confidence intervals. NLP tool-based data extraction could therefore be used as a starting point, but all fields will need to be checked.

Dossier submissions

One of the more obvious uses of AI in market access is populating templates for reimbursement submissions from a reference source such as a global value dossier. However, reimbursement submissions are far more than a box-ticking exercise and involve considerable strategic planning. Here too, AI can be leveraged to support strategy development, using advanced data mining to gather insights from past industry-wide experience and from information on policy and process changes.

The National Institute for Health and Care Excellence (NICE) has recently released guidance on the use of AI in submissions that highlights the need for transparency around AI methodology and encourages engagement with NICE prior to submission. All AI methods should undergo technical and external validation and should be used to augment rather than replace human involvement (5).

A recent review identified 11 Health Technology Assessment (HTA) reports that mentioned using AI/ML methods, with the majority using AI to validate patient-reported outcome instruments and the remainder using AI for modelling (6). Dossier submission is clearly a nascent application of AI, but the abridged timeline for JCA dossier development may accelerate the adoption of AI in this area.

Economic models

With all the buzz around the code-writing skills of large language models (LLMs), economic modelling is an obvious application. In a recent study, a GPT tool fully replicated an economic model with high accuracy, reproducing published incremental cost-effectiveness ratios to within 1% (7). Manual editing of the code was required to remove minor errors and simplify some of the model design but, overall, little human intervention was needed. This use case suggests that LLMs are capable of streamlining several stages of model development, such as double-programming validation of human-built models, in which the AI and a human programmer build the same model independently and identify errors by comparing the results. Full automation of model construction with human editing may be a not-too-distant prospect.
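
To illustrate the kind of code involved, the sketch below is a deliberately simple three-state Markov cost-effectiveness model with an incremental cost-effectiveness ratio (ICER) calculation; the transition probabilities, costs and utilities are invented placeholders, not values from the cited study.

```python
# Deliberately simple three-state Markov cost-effectiveness model with an ICER calculation
import numpy as np

def run_model(transition, cost_per_cycle, qaly_per_cycle, cycles=40, discount=0.035):
    state = np.array([1.0, 0.0, 0.0])  # progression-free, progressed, dead
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1 / (1 + discount) ** t          # discount factor for cycle t
        total_cost += df * state @ cost_per_cycle
        total_qaly += df * state @ qaly_per_cycle
        state = state @ transition             # move the cohort to the next cycle
    return total_cost, total_qaly

standard = run_model(
    np.array([[0.85, 0.10, 0.05], [0.0, 0.80, 0.20], [0.0, 0.0, 1.0]]),
    cost_per_cycle=np.array([2000.0, 3500.0, 0.0]),
    qaly_per_cycle=np.array([0.20, 0.12, 0.0]))
new = run_model(
    np.array([[0.90, 0.07, 0.03], [0.0, 0.82, 0.18], [0.0, 0.0, 1.0]]),
    cost_per_cycle=np.array([5000.0, 3500.0, 0.0]),
    qaly_per_cycle=np.array([0.22, 0.12, 0.0]))

icer = (new[0] - standard[0]) / (new[1] - standard[1])
print(f"ICER: {icer:,.0f} per QALY gained")
```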

But AI can go beyond replicating the work a human can do; ML has proven more adept than traditional statistical methods at analysing large and complex data sets, identifying patterns in costs and treatment outcomes that may not be apparent to human analysts. By identifying the most important factors influencing the effectiveness and cost of an intervention, we can optimize the weighting of these factors, leading to more accurate cost-effectiveness analyses.
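
One simple way to surface such factors is permutation feature importance on a model trained on patient-level cost data, as sketched below; the dataset and column names are assumptions for illustration.

```python
# Ranking cost drivers with permutation importance (hypothetical numeric features)
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

data = pd.read_csv("patient_level_costs.csv")  # assumed patient-level costing dataset
features = ["age", "comorbidity_index", "line_of_therapy", "adverse_event_grade", "adherence"]
X, y = data[features], data["total_cost"]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, importance.importances_mean), key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```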

Predicting healthcare utilization accurately is notoriously difficult, but it is a key element of healthcare decision-making to inform resource allocation and policy development. Training deep-learning models on historical demographic, clinical and socioeconomic data can allow more accurate predictions of healthcare utilization trends and the impact of incorporating new technologies into healthcare systems. However, deep-learning models can be difficult to interpret due to their complexity and would not currently meet the requirements of decision-makers, who require detailed explanations of model structure. Additionally, AI models can be prone to overfitting, performing well on training data but failing to generalize to new data. Performing cross-validation will help identify when this occurs, and providing AI with a larger training dataset may help to mitigate this issue, where feasible.
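
A minimal check for overfitting is to compare in-sample performance with cross-validated performance, as in the sketch below; the data loading and features are placeholders.

```python
# Using cross-validation to flag overfitting in a utilization-prediction model
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

data = pd.read_csv("utilization_history.csv")  # assumed historical dataset (numeric features)
X = data[["age", "deprivation_index", "prior_admissions", "chronic_conditions"]]
y = data["admissions_next_year"]

model = GradientBoostingRegressor(random_state=0)
in_sample_r2 = model.fit(X, y).score(X, y)
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

print(f"In-sample R2: {in_sample_r2:.2f}, 5-fold CV R2: {cv_r2:.2f}")
# A large gap between the two is a warning sign that the model is overfitting.
```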

Pricing

Never before has there been more data available to inform pricing strategy, so it's unsurprising that complex pricing structures are becoming more commonplace. Assimilating data from multiple sources (eg, clinical trials, patient outcomes and utilization data) and translating them into tailored country-specific strategies is where ML algorithms can come into their own. Deep-learning algorithms can quickly update pricing strategies following changes in drug exclusivity, regulatory guidelines or patent expiry. Given the volatile nature of the reference pricing adopted by many European countries, these real-time insights could be invaluable.
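
As a reminder of the mechanics an ML layer would sit on top of, the toy calculation below sets a ceiling price from the average of a basket of other countries' list prices; real external reference pricing rules and baskets vary by country, and the figures are invented.

```python
# Toy external reference pricing calculation (illustrative basket and rule)
basket_prices_eur = {"DE": 1200.0, "FR": 1100.0, "ES": 950.0, "IT": 1000.0}

reference_price = sum(basket_prices_eur.values()) / len(basket_prices_eur)
print(f"Reference price ceiling: EUR {reference_price:,.2f}")

# An ML layer would monitor changes in the basket (eg, generic entry or a price
# cut in a reference country) and trigger this recalculation automatically.
```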

Healthcare is lagging behind many other industries, such as automotive, financial and telecoms, which have already adopted AI-based pricing models. The main barrier to the adoption of AI in healthcare is the quality of the available data on competitor pricing due to many pricing agreements remaining confidential, a challenge that is less pronounced in other industries.

What’s next?

Replicability and transparency are common issues with AI-enabled tools, necessitating rigorous validation before use in reimbursement submissions. This is currently the main limiting factor barring widespread adoption of AI in market access and requires a major shift in HTA bodies’ trust of AI. AI could streamline many aspects of the HTA process and strengthen HTA decision-making, particularly through use of deep-learning models. MLM-generated economic models offer a practical solution to the challenges faced by countries with emerging HTA capabilities, as skills shortages are a common barrier to conducting economic analysis tailored to their healthcare system. This is particularly relevant with the introduction of JCA in the EU, following which economic evaluation will be the focus of HTA for many countries.

The major benefit of adopting AI in market access is that it can save time by automating repetitive tasks, but it will always require expert oversight to ensure accuracy. Moreover, AI can enhance human analysis by identifying patterns that might otherwise be missed, such as factors contributing to variations in costs or clinical outcomes. However, the inherent risk of confidential data leaks underscores the importance of using closed systems, such as retrieval-augmented generation (RAG) models, to safeguard sensitive information.

RAG models appear better suited to market access than general-purpose LLMs because they can pull relevant, up-to-date information from external materials, including proprietary data, into the generation process. Results produced by RAG-based LLMs are therefore more accurate, less prone to hallucination and better grounded in context. RAG models can also be fine-tuned to specific tasks and trained on historical data. Like any other AI model, they have great potential but require an expert eye to validate the findings.
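
At its core, the retrieval step works roughly as sketched below: internal documents are indexed, the passages most relevant to a query are retrieved, and only those passages are placed in the prompt sent to the model. TF-IDF stands in here for a proper embedding model, and the documents are invented examples.

```python
# Bare-bones retrieval step of a RAG pipeline (TF-IDF as a stand-in for embeddings)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Global value dossier: pivotal trial showed a 4.2-month overall survival benefit.",
    "HTA guidance for the comparator recommended use only after first-line therapy.",
    "Internal pricing memo: confidential discount agreed in market X.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

query = "What survival benefit should the submission lead with?"
scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
context = documents[scores.argmax()]  # most relevant internal passage

prompt = f"Answer using only the context below.\n\nContext: {context}\n\nQuestion: {query}"
# `prompt` would then be passed to an LLM running inside a closed environment,
# keeping confidential documents out of any external system.
print(prompt)
```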

Ultimately, while AI holds immense potential for transforming market access, it is essential to balance its use with expert validation and robust data protection measures. By doing so, we can harness the power of AI while mitigating its risks, and better achieve our goal of delivering timely and affordable access to healthcare products and services to patients.

With or without AI, our market access and digital specialists are always pleased to help with any challenges you may be facing. Looking to the future, we'll be demonstrating our RAG tool to interested clients later in Q4 2024. To keep up with the latest insights, visit the AMICULUM News and Insights page.

References

  1. Adamson B, Waskom M, Blarre A, et al. Approach to machine learning for extraction of real-world data variables from electronic health records. Front Pharmacol 2023;14:1180962.
  2. D'Amico S, Dall'Olio D, Sala C, et al. Synthetic data generation by artificial intelligence to accelerate research and precision medicine in hematology. JCO Clin Cancer Inform 2023;7:e2300021.
  3. de la Torre-López J, Ramírez A, Romero JR. Artificial intelligence to automate the systematic review of scientific literature. Computing 2023;105:2171-94.
  4. Gue CCY, Rahim NDA, Rojas-Carabali W, et al. Evaluating the OpenAI's GPT-3.5 Turbo's performance in extracting information from scientific articles on diabetic retinopathy. Syst Rev 2024;13:135.
  5. NICE. Use of AI in evidence generation: NICE position statement. 2024. Available here
  6. Szawara P, Zlateva J, Kotseva F, et al. HTA364 Can artificial intelligence and machine learning be used to demonstrate the value of a technology for HTA decision-making? Value Health 2023;26:S390.
  7. Reason T, Rawlinson W, Langham J, et al. Artificial intelligence to automate health economic modelling: a case study to evaluate the potential application of large language models. Pharmacoecon Open 2024;8:191-203.

This content was provided by Amiculum
