AI in pharmacy: What ethical challenges do companies need to address now?

What if an AI made your diagnosis? Or helped decide your medication dosage? To many, this may sound like the distant future. But in pharmaceutical research and care, it has long been part of everyday practice. Already, 54% of biopharmaceutical companies use AI in the life sciences sector (Source). Algorithms detect tumour cells, analyse data from clinical trials and conduct conversations with patients. The crucial question, however, is: how responsibly do we design these applications?
Who decides whether a system is trustworthy enough to work with sensitive health data? Who bears the consequences if an AI recommendation leads to a misdiagnosis or incorrect treatment? And how do we ensure that, amid all the technological dynamism, one thing is not lost: the human being?
“AI can only be successful when technical excellence meets ethical responsibility – an attitude that pharmaceutical companies must actively embrace.”
Manja Rehfeld, PR Expert at Mashup Communications
How is AI already changing pharmaceutical practice today?
Many pharmaceutical companies no longer use AI merely in pilot projects, but as an integral part of their processes. Areas of application include:
- Drug development: AI reduces clinical data analysis time by up to 50% and speeds up the recruitment of study participants by around 30%. (Source)
- Medical diagnostics: Algorithms evaluate imaging procedures and detect anomalies that often escape the human eye.
- Patient care: Wearables and smart devices provide continuous health data, which AI systems evaluate to identify risks at an early stage.
All these advances open up enormous potential. However, they also raise questions of fairness, transparency and reliability.
EU AI Regulation: What do pharmaceutical companies need to bear in mind?
The EU AI Regulation came into force in August 2024 and classifies many applications in the healthcare sector as high-risk systems. These are subject to particularly strict requirements in terms of safety, traceability and transparency. Companies must comply with all requirements by August 2026; otherwise they face severe penalties of up to €35 million or 7% of their global annual turnover.
Specifically, this means:
- Companies must document how AI systems work and what safeguards are in place.
- Companies must demonstrate that the data used has been selected fairly and does not disadvantage any groups of people.
- Responsibility always remains with humans; AI must not be a black box.
These requirements are not just bureaucracy, but essential protection for patients.
Why is technical progress alone not enough?
Technological innovations in medicine always raise ethical questions. AI can efficiently analyse large amounts of data, but it cannot take the human dimension into account – such as fear, uncertainty or individual context.
Therefore, it requires not only technical excellence, but also ethical judgement and responsible decision-making. AI is a valuable tool, but it must always remain embedded in human action.
How do companies incorporate ethical responsibility into their AI strategy?
Clear structures within the company are necessary in order to use AI reliably and responsibly. Important questions in this regard include:
- Who checks the data quality on which AI is based?
- Who decides on the actual deployment of a system?
- What happens when AI malfunctions?
Companies such as Johnson & Johnson and Merck are leading the way and have already provided comprehensive training to over 50,000 employees. (Source) They are establishing clear governance structures to systematically promote ethical and technical skills.
Clear roles and processes are therefore needed to deal with these issues. Some companies work with internal ethics committees, while others bring in external experts. Above all, it is important to have a space where technical, medical and social perspectives can come together. Not every decision has to be perfect, but it must be well-founded and well-considered.
This creates a culture in which new technologies are not introduced blindly, but are consciously designed. Ultimately, this not only improves the systems, but also strengthens trust in the company.
Humans and machines: how does a responsible partnership work?
The goal should not be to replace human labour, but to complement it in a meaningful way. This also means that AI must be designed and used in such a way that it not only respects our values, but actively protects them. This is especially important in a field as sensitive as pharmacy. Ethical principles must be consistently implemented:
1. Transparency
Patients must be able to understand how AI is integrated into their treatment.
2. Data protection
AI must comply with the highest data protection standards.
3. Empathy
AI should communicate in a patient-centred manner to build trust.
How can trustworthy communication about AI be achieved in the healthcare industry?
Artificial intelligence remains difficult for many people to grasp. Especially in medicine and pharmacy, where people’s own health is at stake, uncertainty quickly arises. That is why it is crucial how the use of AI is discussed, both internally and externally.
Communication here must be neither overly technical nor convoluted. Patients want to understand how their data is being used, whether they can make their own decisions, and whether a system really responds to their individual situation. Employees also have questions: Will their expertise be replaced? What responsibilities will remain with them?
Pharmaceutical companies should therefore view new regulations not as a hindrance, but as an opportunity for communication, and speak openly about how they work with AI, what ethical standards they set for themselves, and how they ensure a connection between technology and humanity.
Good communication means openly explaining where AI can help and where its limitations lie. Those who are transparent in this regard build bridges where uncertainty would otherwise arise.
Why must ethics become an integral part of a company’s DNA?
In many discussions about technology, ethics is treated as an afterthought, something that only comes into play once problems arise. In practice, the opposite is true: thinking ethically before systems are developed or introduced avoids many problems from the outset.
This attitude is indispensable, especially in the pharmaceutical and healthcare industries, where people’s well-being and often their lives are at stake. Ethics is not a luxury here, but a necessity. It starts with simple questions: Do we really want this application? Do we know how it works? And: Would we use it on ourselves or our loved ones?
AI that works with data on illnesses, diagnoses or lifestyle encroaches deeply on privacy. That is why it is crucial how this data is processed, stored and used – and whether those affected have any influence over this. When such questions become a matter of course, ethics is no longer an add-on, but part of a company’s DNA. Because in this area in particular, ethical conduct means much more than mere compliance.
Conclusion: Why attitude towards AI is crucial for success
Artificial intelligence can improve medical care – but only if we design it responsibly. It can provide earlier warnings, enable more targeted treatment and reduce workloads. But technological progress does not succeed on its own. What matters is what we do with it today. The EU AI Regulation provides initial guidelines for this. But the real attitude comes from within companies: through decisions, structures and honest communication.
Anyone who uses AI in the pharmaceutical industry must take responsibility – not at some point in the future, but now. Not only when mistakes happen. Not only because it is required by law. But out of conviction. This responsibility applies not only to technology, but above all to the people whose health, data and trust are at stake. Because real progress can only be achieved when ethical awareness and technological possibilities come together. Progress that is not only efficient, but also fair. That not only improves processes, but also cooperation. And that shows that modern technology and the human touch are not mutually exclusive – but are strong precisely where it really counts.
FAQ – AI in pharmacy
1. How is AI currently changing the pharmaceutical industry?
54% of biopharmaceutical companies use AI in the life sciences sector, primarily in drug discovery, diagnostics and patient care.
2. What advantages does AI offer for research and development?
Clinical data analysis times can be reduced by up to 50%, and study recruitment can be accelerated by around 30%.
3. What obligations does the EU AI Regulation impose on the pharmaceutical industry?
High-risk applications such as diagnostic systems must meet strict transparency, documentation and safety requirements by August 2026.
4. What happens in the event of violations of the EU AI Regulation?
Penalties of up to €35 million or 7% of global annual turnover may be imposed.
5. How can companies integrate ethical responsibility into AI projects?
Through clear governance structures, interdisciplinary ethics committees and transparent communication with all stakeholders.
Would you like to learn more about our five pillars of responsible collaboration with artificial intelligence? Then click here.