Artificial Intelligence (AI) is marking a turning point in the history of technology, and 2025 will bring even more surprises.
Predicting exactly what lies ahead isn’t easy, but it’s possible to highlight trends and challenges that will define the near future of AI. Among these is the challenge of the so-called "centaur doctor" or "centaur teacher," which is crucial for those of us working in AI development.
AI has become a crucial tool for addressing major scientific challenges. Fields like healthcare, astronomy, space exploration, neuroscience, and climate change, among others, will benefit even more than they already do.
AlphaFold, whose creators won the 2024 Nobel Prize in Chemistry, has determined the three-dimensional structure of 200 million proteins, virtually all of those known.
This represents a significant breakthrough in molecular biology and medicine, enabling the design of new drugs and treatments. In 2025, we'll see widespread use of its predictions, which are openly accessible.
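To give a sense of what that open access means in practice, here is a minimal Python sketch that downloads a predicted structure from the AlphaFold Protein Structure Database; the endpoint and field names reflect the public API at the time of writing and may change.

```python
# Minimal sketch: fetch a predicted protein structure from the
# open-access AlphaFold database. Endpoint and field names assume
# the current public API schema.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"

entries = requests.get(url, timeout=30).json()
pdb_url = entries[0]["pdbUrl"]  # link to the predicted 3D structure

with open(f"{UNIPROT_ID}.pdb", "w") as fh:
    fh.write(requests.get(pdb_url, timeout=30).text)
print(f"Saved predicted structure for {UNIPROT_ID}")
```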
Meanwhile, ClimateNet uses artificial neural networks to perform precise spatial and temporal analysis of large volumes of climate data—vital for understanding and mitigating global warming.
The use of ClimateNet will be essential in 2025 for predicting extreme weather events with greater accuracy.
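To make the idea concrete, here is a deliberately tiny PyTorch sketch of the kind of per-grid-cell labeling such systems perform; the architecture, input variables, and class count are illustrative assumptions, not ClimateNet's actual design.

```python
# Illustrative sketch (not ClimateNet's real architecture): a tiny
# convolutional network that assigns a class to every cell of a
# climate grid, e.g. background / tropical cyclone / atmospheric river.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self, in_channels=4, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, n_classes, kernel_size=1),  # per-cell class scores
        )

    def forward(self, x):   # x: (batch, variables, lat, lon)
        return self.net(x)  # -> (batch, n_classes, lat, lon)

# A fake batch: 2 samples, 4 atmospheric variables on a 64x128 grid.
fields = torch.randn(2, 4, 64, 128)
labels = TinySegmenter()(fields).argmax(dim=1)  # event class per grid cell
print(labels.shape)  # torch.Size([2, 64, 128])
```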
Both judicial rulings and medical diagnoses are considered high-risk scenarios. It's more urgent than ever to establish systems in which humans always make the final decision.
AI experts are working to ensure user trust, transparency, protection of individuals, and that humans remain at the center of decision-making.
This is where the “centaur doctor” challenge comes into play. Centaurs are hybrid human-algorithm models that combine the formal analysis of machines with human intuition.
A "centaur doctor + an AI system" enhances the decisions made by humans on their own and those made by AI systems on their own.
A doctor will always be the one to press the accept button, and a judge will always be the one to decide whether a ruling is fair.
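As a minimal sketch of that division of labor, the hypothetical snippet below has an AI model propose a diagnosis while a human always makes the final call; all names and values here are invented for illustration.

```python
# Hypothetical "centaur" workflow: the AI proposes, the human decides.
def ai_recommendation(patient_record: dict) -> tuple[str, float]:
    """Stand-in for a diagnostic model returning (diagnosis, confidence)."""
    return "pneumonia", 0.87

def centaur_decision(patient_record: dict) -> str:
    diagnosis, confidence = ai_recommendation(patient_record)
    print(f"AI suggests: {diagnosis} (confidence {confidence:.0%})")
    # The doctor always presses the final button.
    if input("Accept this diagnosis? [y/n] ").strip().lower() == "y":
        return diagnosis
    return input("Enter your own diagnosis: ")

final = centaur_decision({"age": 54, "symptoms": ["fever", "cough"]})
print("Recorded decision:", final)
```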
Autonomous AI agents based on language models are the 2025 goal of major tech companies like OpenAI (ChatGPT), Meta (LLaMA), Google (Gemini), and Anthropic (Claude).
So far, these AI systems offer recommendations, but by 2025, they are expected to make decisions on our behalf.
AI agents will perform personalized and precise actions on tasks that aren’t high-risk, always aligned with the user’s needs and preferences. For example: buying a bus ticket, updating a calendar, recommending a specific purchase, and completing it.
They’ll also be able to answer our emails—something that takes up a lot of our time daily.
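A hypothetical sketch of such an agent: a planner (standing in for a language model) maps a request to a tool call, and a dispatcher executes only tasks on the user's allowed list. Every name here is an illustrative assumption.

```python
# Hypothetical low-risk task agent: plan a tool call, then execute it
# only if the task type is on the user's allowed list.
ALLOWED_TASKS = {"buy_bus_ticket", "update_calendar", "draft_email_reply"}

def plan(request: str) -> dict:
    """Stand-in for a language model mapping a request to an action."""
    return {"tool": "buy_bus_ticket",
            "args": {"route": "Madrid-Valencia", "date": "2025-03-01"}}

def execute(action: dict) -> str:
    if action["tool"] not in ALLOWED_TASKS:
        return "Refused: task is not on the allowed list."
    # A real agent would call the ticketing / calendar / mail API here.
    return f"Done: {action['tool']} with {action['args']}"

print(execute(plan("Buy me a bus ticket to Valencia on March 1st")))
```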
In this regard, OpenAI is building autonomous-agent capabilities around ChatGPT, and Google has introduced Gemini 2.0, a model designed for developing autonomous AI agents.
Meanwhile, Anthropic has introduced two updated versions of its Claude language model: Claude 3.5 Haiku and Claude 3.5 Sonnet.
Sonnet can use a computer just like a person would. This means it can move the cursor, click buttons, type text, and navigate through screens.
It also enables desktop automation, allowing users to grant Claude access to and control over certain aspects of their personal computers, just as a person would operate them.
This feature, known as "computer use," could revolutionize how we automate and manage our daily tasks.
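Conceptually, such an agent runs an observe-think-act loop over the screen. The sketch below is purely illustrative: every function is a hypothetical stand-in, not Anthropic's actual API.

```python
# Illustrative observe-think-act loop behind a "computer use" agent.
# All functions are hypothetical stand-ins, not Anthropic's API.
import time

def take_screenshot() -> bytes:
    return b""  # a real agent would capture the screen here

def ask_model(goal: str, screen: bytes) -> dict:
    return {"type": "done"}  # a real agent would query the model here

def perform(action: dict) -> None:
    print("Performing:", action)  # move cursor, click, type text...

def run(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screen = take_screenshot()
        action = ask_model(goal, screen)  # e.g. {"type": "click", "x": 120, "y": 300}
        if action["type"] == "done":
            return
        perform(action)
        time.sleep(0.5)  # let the interface settle between actions

run("Open the calendar and create a meeting for Friday at 10:00")
```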
In e-commerce, autonomous AI agents could make purchases for users, provide advice for business decisions, manage inventory automatically, work with suppliers (including logistics providers) to optimize the restocking process, update shipping statuses, generate invoices, and more.
In education, they could personalize lesson plans for students, identify areas for improvement, and suggest suitable learning resources.
We’ll move toward the concept of a "centaur teacher," supported by AI agents in education.
The idea of autonomous agents raises profound questions about the concept of "human autonomy and control." What does "autonomy" really mean?
These AI agents will introduce the need for pre-approval. What decisions will we allow these entities to make without our direct approval (without human control)?
We face a critical dilemma: knowing when it's better to delegate to autonomous AI agents and when we need to make the decision ourselves, that is, to rely on "human control" or "human-AI interaction."
The concept of pre-approval will become crucial in the use of autonomous AI agents.
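One way to picture pre-approval is as an explicit policy: cheap, reversible actions run autonomously, while everything else is escalated to the human. The action names and spending limit below are illustrative assumptions.

```python
# Sketch of a "pre-approval" policy for an autonomous agent.
PRE_APPROVED = {"update_calendar", "draft_email", "buy_bus_ticket"}
SPENDING_LIMIT_EUR = 50.0  # above this, always ask the human

def needs_human(action: str, cost_eur: float = 0.0) -> bool:
    return action not in PRE_APPROVED or cost_eur > SPENDING_LIMIT_EUR

for action, cost in [("update_calendar", 0.0),
                     ("buy_bus_ticket", 19.90),
                     ("buy_bus_ticket", 240.0)]:
    mode = "ask the user first" if needs_human(action, cost) else "run autonomously"
    print(f"{action} ({cost:.2f} EUR): {mode}")
```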
2025 will be the year of expansion for small, open language models (SLMs).
These language models could eventually be installed on mobile devices, enabling much more personalized and intelligent voice control of our phones than assistants like Siri, and even answering our emails on the device.
SLMs are compact and more efficient, requiring no massive servers to function. These open-source solutions can be trained for specific application scenarios.
They can be more privacy-preserving and are ideal for low-cost computers and smartphones.
SLMs will be of interest for business adoption, as they will be cheaper, more transparent, and potentially more auditable.
These small models will enable the integration of applications for medical recommendations, education, automatic translation, text summarization, and instant spelling and grammar correction—all from small devices without needing an internet connection.
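As a minimal sketch of how easy local inference has become, the snippet below runs a small open model with the Hugging Face transformers library; the model name is just one example of an SLM and can be swapped for any comparable small open model (the weights are downloaded once, after which inference runs locally).

```python
# Minimal sketch: run a small open language model locally.
# The model name is one example; any comparable SLM would do.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, laptop-friendly
)

prompt = "Correct the spelling: 'The wether is nice todai.'"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```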
Among their significant social benefits, they could help bring language models into education in underserved areas.
Health-focused SLMs could also improve access to diagnoses and recommendations in resource-limited regions.
Their development is crucial to supporting communities with fewer resources.
They may accelerate the arrival of the "centaur teacher" and "centaur doctor" in every part of the world.
On June 13, 2024, the European AI Act was signed into law; most of its provisions will apply after a two-year transition period. In 2025, standards and evaluation guidelines will be developed, including ISO and IEEE standards.
Earlier, in 2020, the European Commission published the first Assessment List for Trustworthy AI (ALTAI). This list covers seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. These will form the foundation of future European standards.
Having evaluation standards is key to auditing AI systems. Let’s take the example of an autonomous vehicle having an accident—who is responsible? The regulatory framework will address issues like these.
Dario Amodei, CEO of Anthropic, in his essay Machines of Loving Grace (October 2024), outlines the vision of major tech companies: "I believe it’s essential to have a truly inspiring vision for the future, not just a plan to put out fires."
There are contrasting visions from more critical thinkers, such as Yuval Noah Harari in his book Nexus.
Therefore, we need regulation. It provides the balance necessary for the development of trustworthy and responsible AI and enables us to tackle the major challenges for the benefit of humanity, as highlighted by Amodei.
Alongside regulation, we also need governance mechanisms in place to execute those "firefighting" plans when problems arise.