In 2025, European experts advanced practical safeguards for medical AI, focusing on ethics, legal compliance, clinical usability, and secure digital infrastructure. Artificial intelligence is becoming an everyday presence in healthcare, from medical imaging to clinical decision support.
As these technologies move closer to patients, one question matters more than ever: can we trust them? In 2025, major steps were taken across Europe to ensure that AI in healthcare is not only innovative, but also safe, ethical, explainable, and designed around real clinical needs. Through cross-sector stakeholder collaboration and regulatory alignment, the European project AI4LUNGS is contributing to turning this vision into practical progress.
VHIO’s role in this project, which includes the participation of Dr. Alex Martínez and Dr. Ilaria Priano, medical oncologists and researchers, and Lucia Cane, data entry specialist, from VHIO’s Thoracic Tumors Group, led by Dr. Enriqueta Felip, is focused on providing retrospective clinical data on patients with non-small cell lung cancer according to the clinical criteria agreed upon by the project’s various clinical partners.
Safer AI for patients, greater confidence for clinicians
One of the most important developments of the year has been the focus on ethical and legal safeguards for medical AI. As new and established European regulations such as the AI Act, the MDR and the GDPR shape how health technologies are developed and deployed, 2025 saw concrete efforts to translate these rules into real-world practice, with key contributions from our legal partner Timelex and our digital ethics expert Deloitte.
Experts in healthcare, law, and ethics worked together to assess risks such as over-reliance on automated systems, lack of transparency, and unclear accountability. The outcome: clearer safeguards to ensure that AI supports rather than replaces human judgement, helping clinicians make better-informed decisions while keeping patients at the centre of care.
For patients, this means greater protection of personal health data and increased confidence that AI-enabled tools are designed with appropriate safeguards, monitoring, and accountability.
From promising algorithms to clinically usable tools
Another key shift in 2025 was the move from experimental AI models toward validated, clinically relevant systems. Across Europe, we have seen real-life implementation of AI decision-making tools in clinical settings. Clinicians and engineers collaborate to ensure that AI tools are tested for:
- Reliability and usability, beyond accuracy alone
- Consistency across different clinical settings and patient populations
- Clear and interpretable outputs for clinical decision-making
- Integration with existing clinical workflows
Training strategies for healthcare professionals have also been developed, recognising that trust in AI depends as much on understanding as on performance.

Building secure, scalable digital health infrastructure

Behind every trustworthy AI system is a secure digital backbone. In 2025, AI4LUNGS’ technical partners KPMG, Yonalink, EXUS AI Lab, Fraunhofer ITWM, and RPTU made significant progress in building robust, privacy-preserving infrastructures that allow AI tools to operate safely at scale. This work focused on secure cloud environments, controlled data access, and careful system integration, all essential to ensure that sensitive health data remains protected while enabling innovation. These foundations are critical for future AI services that can be expanded across regions and healthcare systems without compromising safety or quality.

While these advances may happen behind the scenes, they directly influence the quality and reliability of the tools that eventually reach clinics, helping ensure that innovation translates into better care rather than added complexity.
A more connected European health AI ecosystem
In 2025, collaboration proved just as important as innovation. Through synergies with other EU-funded projects, AI4LUNGS is creating a growing ecosystem for knowledge exchange on technical solutions and clinical research, with potential for privacy-compliant data sharing. Joint events, webinars, shared dissemination activities, and cross-promotion help accelerate learning while building a healthier European ecosystem for responsible medical AI.
Looking ahead
As AI becomes a permanent part of healthcare, progress is no longer measured by technical breakthroughs alone. It is measured by trust, safety, usability, and real benefit to people. From 2026 through early 2027, AI4LUNGS will organise Awareness Workshops, held nationally or jointly, in Greece, Cyprus, Portugal, Germany, Norway, Italy, Spain and Israel. These workshops aim to build a shared understanding, align strategies, and empower stakeholders to actively shape the future of AI-enabled lung healthcare.
About AI4LUNGS
The AI4LUNGS project officially started on 1 January 2024, with a duration of 3.5 years. Funded by the European Union under the Horizon Europe Programme (Grant Agreement No. 101080756), the project has received €6.9 million. It focuses on computational models for new patient stratification strategies (RIA) under the HORIZON-HLTH-2022-TOOL-12-01-two-stage call. The consortium consists of 18 partners across 10 countries, working together to develop AI-driven solutions aimed at improving lung health.