What if an AI made your diagnosis? Or helped decide your medication dosage? For many, this still sounds like a distant future. But in pharmaceutical research and care, it has long been part of everyday life: 54% of biopharmaceutical companies already use AI in the life sciences sector (Source). Algorithms detect tumour cells, analyse data from clinical trials and hold conversations with patients. The crucial question, however, is: how responsibly do we design these applications?
Who decides whether a system is trustworthy enough to work with sensitive health data? Who bears the consequences if an AI recommendation leads to a misdiagnosis or incorrect treatment? And how do we ensure that, amid all the technological dynamism, one thing is not lost: the human being?
“AI can only be successful when technical excellence meets ethical responsibility – an attitude that pharmaceutical companies must actively embrace.”
Manja Rehfeld, PR Expert at Mashup Communications
Many pharmaceutical companies use AI not only in pilot projects, but as an integral part of their processes. Areas of application range from drug discovery and diagnostics to patient care, clinical data analysis and study recruitment.
All these advances open up enormous potential. However, they also raise questions of fairness, transparency and reliability.
The EU AI Regulation came into force in August 2024 and classifies AI applications in the healthcare sector, such as diagnostic systems, as high-risk. These are subject to particularly strict requirements in terms of safety, traceability and transparency. Companies must comply with all requirements by August 2026; otherwise, they face severe penalties of up to €35 million or 7% of their global annual turnover.
Specifically, this means that high-risk systems must be developed with systematic risk management, documented in detail, kept traceable in their decisions and placed under human oversight. These requirements are not just bureaucracy, but essential protection for patients.
Technological innovations in medicine always raise ethical questions. AI can efficiently analyse large amounts of data, but it cannot take the human dimension into account – such as fear, uncertainty or individual context.
Using AI in medicine therefore requires not only technical excellence, but also ethical judgement and responsible decision-making. AI is a valuable tool, but it must always remain embedded in human action.
Clear structures within the company are necessary in order to use AI reliably and responsibly. Important questions in this regard include: Who decides which systems are deployed? Who reviews their recommendations? And who bears responsibility if something goes wrong?
Companies such as Johnson & Johnson and Merck are leading the way and have already provided comprehensive training to over 50,000 employees (Source). They are establishing clear governance structures to systematically promote ethical and technical skills.
Clear roles and processes are therefore needed to deal with these issues. Some companies work with internal ethics committees, while others bring in external experts. Above all, it is important to have a space where technical, medical and social perspectives can come together. Not every decision has to be perfect, but it must be well-founded and well-considered.
This creates a culture in which new technologies are not introduced blindly, but are consciously designed. Ultimately, this not only improves the systems, but also strengthens trust in the company.
The goal should not be to replace human labour, but to complement it in a meaningful way. This also means that AI must be designed and used in such a way that it not only respects our values, but actively protects them. This is especially important in a field as sensitive as pharmacy. Ethical principles must be consistently implemented:
- Patients must be able to understand how AI is integrated into their treatment.
- AI must comply with the highest data protection standards.
- AI should communicate in a patient-centred manner to build trust.
Artificial intelligence remains difficult for many people to grasp. Especially in medicine and pharmacy, where people's own health is at stake, uncertainty arises quickly. How the use of AI is talked about, both internally and externally, is therefore crucial.
Communication here must avoid jargon and convoluted explanations. Patients want to understand how their data is being used, whether they can make their own decisions, and whether a system really responds to their individual situation. Employees have questions too: Will their expertise be replaced? What responsibilities will remain with them?
Pharmaceutical companies should therefore view new regulations not as a hindrance, but as an opportunity for communication, and speak openly about how they work with AI, what ethical standards they set for themselves, and how they ensure a connection between technology and humanity.
Good communication means openly explaining where AI can help and where its limitations lie. Those who are transparent in this regard build bridges where uncertainty would otherwise arise.
In many discussions about technology, ethics is treated as an afterthought, something that only comes into play once problems arise. In practice, the opposite approach pays off: thinking ethically before systems are developed or introduced avoids many problems from the outset.
This attitude is indispensable, especially in the pharmaceutical and healthcare industries, where people’s well-being and often their lives are at stake. Ethics is not a luxury here, but a necessity. It starts with simple questions: Do we really want this application? Do we know how it works? And: Would we use it on ourselves or our loved ones?
AI that works with data on illnesses, diagnoses or lifestyle encroaches deeply on privacy. That is why it is crucial how this data is processed, stored and used – and whether those affected have any influence over this. When such questions become a matter of course, ethics is no longer an add-on, but part of a company’s DNA. Because in this area in particular, ethical conduct means much more than mere compliance.
Artificial intelligence can improve medical care – but only if we design it responsibly. It can provide earlier warnings, enable more targeted treatment and reduce workloads. But technological progress does not happen by itself; what matters is what we do with it today. The EU AI Regulation provides initial guidelines. The real commitment, however, must come from within companies: through decisions, structures and honest communication.
Anyone who uses AI in the pharmaceutical industry must take responsibility – not at some point in the future, but now. Not only when mistakes happen. Not only because it is required by law. But out of conviction. This responsibility applies not only to technology, but above all to the people whose health, data and trust are at stake. Because real progress can only be achieved when ethical awareness and technological possibilities come together. Progress that is not only efficient, but also fair. That not only improves processes, but also cooperation. And that shows that modern technology and the human touch are not mutually exclusive – but are strong precisely where it really counts.
54% of biopharmaceutical companies use AI in the life sciences sector, primarily in drug discovery, diagnostics and patient care.
Clinical data analysis times can be reduced by up to 50%, and study recruitment can be accelerated by around 30%.
High-risk applications such as diagnostic systems must meet strict transparency, documentation and safety requirements by August 2026.
Penalties of up to €35 million or 7% of global annual turnover may be imposed.
Responsible use is ensured through clear governance structures, interdisciplinary ethics committees and transparent communication with all stakeholders.
Would you like to learn more about our five pillars of responsible collaboration with artificial intelligence? Then click here.