
Research conducted at the speed of light, synthetic personas responding instantly to complex questions, and algorithms rifling through the deep recesses of data for insights we didn’t even know we needed. It is a terabyte goldmine.
With their capacity and reach growing daily, the algorithms and machinery of computational learning herald a new age in which ‘remote’, ‘platforms’, ‘cyber’ and ‘digital’ are the workhorse words of everyday healthcare.
The promise to tackle and treat conditions with targeted and economically viable therapies is beguiling, but it dissipates without the human touch. Artificial intelligence will always be…artificial.
And it is prone to mistakes, some of them alarming. As the EMA’s guiding principles on the use of large language models (LLMs) – the computer programmes trained on huge data sets – warn, “The use of LLMs is not without risk. LLMs have shown surprising failure at seemingly trivial tasks, returning irrelevant or inaccurate responses – known as hallucinations.”
How we make the best of AI is the thought that echoes through every new announcement of its potential, and regulators around the world now see answering that question as one of their prime purposes.
Laura Squire, Chief Quality and Access Officer at the UK’s MHRA, underscored the need for oversight and partnership as she welcomed the UK government’s £500m commitment to become a science and technology superpower by 2030.
She commented, “The quest to control [AI’s] capabilities becomes more focused and essential every day, and regulators around the world are collaborating with pharma to bring in the right controls that provide directional guidelines without compromising potential.”
Juan Equihua, Head of AI at Havas Lynx, the global healthcare communications agency, sees an exciting future for AI as a tool to facilitate and elevate the creativity and decision-making that are at the core of the company’s service.
“We embrace AI’s potential, but we are also pragmatic. There are so many ways it can be used to empower people to do their jobs better, principally by freeing up their time to do what they do best – the creativity and the critical thinking,” he says.
“LLMs cannot make decisions, but they are good at summarising and collating research, taking away the mundane work, and these capabilities allow us to focus on more strategic tasks such as delivering efficient and effective campaigns for our clients. AI is there to enhance creativity and deliver meaningful, high-quality outcomes.
“AI should augment our skills and workflows, not replace human creativity and judgment.”
Synthetic personas
AI is here to stay, and its influence is spreading across healthcare and industry. Havas Lynx is maximising its capacity to process large volumes of data quickly, uncover patterns and insights that might be missed by human analysis, and automate repetitive tasks.
Juan adds, “Our synthetic personas have been particularly effective in simulating diverse healthcare professional (HCP) experiences and outcomes, allowing us to anticipate and address potential communication challenges before they arise, as well as significantly reducing the time needed to test campaigns.”
But a critical point for him is that AI is subordinate to human decision-making. He comments, “We follow a principle of AI augmentation rather than replacement. This means that AI tools are used to support and enhance human decision-making, not to replace it.
“We take a human-first approach, which means any AI output is ultimately a human responsibility, and AI content is not produced and published in its generic form.”