Between Intelligence and Ignorance: lessons from the frontlines of AI
From machine learning to large language models, leading expert Georg Gottlob reflects on the milestones, risks, and future of AI, while advocating for a more human-centered approach.
The evolution of artificial intelligence has been marked by stunning breakthroughs—from the early triumphs of machine learning to the recent emergence of generative models that seem to think, write, and even create like humans. But alongside the marvels lie deep inconsistencies, ethical challenges, and a fundamental truth: without human intelligence, there is no artificial intelligence.
Today we’re surfing with Georg Gottlob, computer scientist and Professor at the University of Calabria.
Throughout your career, you’ve experienced firsthand the remarkable evolution of technology and systems. In your opinion, has there been a true turning point, a moment that marked a definitive shift? If so, which one?
The rise of machine learning that actually works marked a major breakthrough, allowing computers to learn from data and tackle complex problems.
This progress became clear when a computer first defeated a world chess champion in 1997, showing that machines could outthink humans in strategic games. Since then, machine learning has transformed medicine by improving diagnosis and speeding up drug discovery; machine-learning-based techniques have solved the protein-folding puzzle, helping design better medicines. Robotics and self-driving cars have also advanced significantly, relying on machines’ ability to recognize patterns in vast amounts of information.
Yet, this was just the beginning.
In my opinion, the real leap came with large language models, which combine machine learning with knowledge drawn from billions of texts. These models can understand and generate language in a way that feels natural and surprising. What’s most fascinating is that they suggest human language may be more statistical in nature than many thought.
To me, this is a true revolution in how we understand and use intelligence.
The landscape of AI is evolving at an unprecedented pace, with large language models and generative AI now at the forefront. From your perspective, what are the most exciting opportunities and, conversely, the most significant challenges or ethical considerations that these new paradigms present for the future of AI?
AI is evolving at an unprecedented pace, with breakthroughs like autonomous driving already promising to improve safety and even benefit distracted drivers like myself.
Yet, the most profound shift is now unfolding with generative AI and large language models, which are set to deeply transform office work, programming, and a wide range of intellectual tasks.
What excites me most is the idea that generative AI will become a true companion, not just automating repetitive or bureaucratic work, but also enhancing our creativity by proposing alternative solutions to complex problems.
However, these opportunities come with significant challenges. Overreliance on AI risks weakening our own judgment, and the biases present in training data can lead to unfair or skewed results.
Moreover, the lack of transparency in how these models generate answers raises ethical concerns, especially in sensitive domains like medicine or law.
The future of generative AI depends on thoughtful use—augmenting human potential while staying alert to its limitations and ethical risks.
You’ve stated that “without natural intelligence, there is no artificial intelligence.” The impact of AI on the job market is indeed at the centre of a controversial debate. In your view, what is the role of humans in monitoring, verifying, and supervising AI systems today?
It is well known that AI will inevitably lead to job losses, and there is a real risk that fewer new jobs may be created than are lost, presenting a major challenge that only politics and society, not AI itself, can address. If managed wisely, this shift could result in shorter working hours and longer vacations for everyone; if mishandled, it could trigger a severe crisis. Addressing this will require significant effort, including large-scale retraining and education.
At the same time, many new roles are emerging in and around AI, as these systems are human creations, designed, trained, and refined by skilled scientists and engineers. For example, the University of Calabria, where I work, is now offering an integrated program in medicine and digital technologies, preparing graduates who are both medical doctors and computer scientists with significant AI skills.
There is also a growing need for human mediators between people and AI systems, especially in sensitive sectors like banking, where clients often prefer human advice for critical operations. Just as MOOCs (Massive Open Online Courses) did not replace university professors but complemented their roles, I believe AI will not be used where it disrupts essential human communication and peer interaction.
At the opening ceremony of the University of Calabria’s Academic Year back in 2023, you offered the audience a very meaningful insight on the relationship between “Artificial Intelligence” and “Artificial Ignorance”. How would you explain this concept? How important is it today to anticipate and manage the “ignorance” of AI systems during their design and what techniques do you recommend to address this effectively?
While large language models can produce impressive and fluent texts, they are prone to making significant errors, especially when dealing with facts or nuanced judgments not easily found in trusted sources.
Too often, it seems these systems fail to integrate all relevant knowledge before reaching a conclusion, leading to what I call “artificial ignorance”—the tendency of AI to overlook or ignore key facts and commit logical fallacies, even while sounding persuasive.
For example, I recently asked an LLM to list competitors of a famous AI and business intelligence software vendor that had shifted its core business away from enterprise software to focus primarily on Bitcoin treasury operations. Initially, the model provided a list of business intelligence firms. However, when asked about the company's main activity, it correctly identified the Bitcoin-related focus—only to revert to the original list of enterprise software competitors when the question was repeated. It didn’t include any competitor from the Bitcoin treasury space. This inconsistency highlights how LLMs can act as if they have a fragmented or inconsistent understanding, much like a person with multiple personalities.
Anticipating and managing this “ignorance” is crucial: designers and users must combine error prevention—through careful data curation, prompt engineering, and validation—with robust error handling, such as fallback mechanisms and human oversight, to ensure AI systems remain reliable and trustworthy.
We are currently studying methods of fully automated prompt engineering to improve the accuracy of LLMs. Our Data Analysis system steers an LLM to obtain more precise answers and can even use LLMs to find rules for automatically identifying anomalies in large datasets.
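The combination of error prevention and fallback handling described above can be illustrated with a minimal sketch. Note that this is an illustrative assumption, not the actual system mentioned in the interview: `call_llm` is a hypothetical stand-in for a real model API, and the validator and retry policy are simplified examples of the general pattern.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM stub: returns canned answers for demonstration only."""
    canned = {
        "capital of France?": "Paris",
        "capital of France? Answer with a single word.": "Paris",
    }
    return canned.get(prompt, "I am not sure.")

def validated_answer(question: str, validator, max_retries: int = 2):
    """Query the model, validate the reply, refine the prompt on failure,
    and fall back (here: return None, signalling human review) when
    validation keeps failing."""
    prompt = question
    for _ in range(max_retries + 1):
        answer = call_llm(prompt)
        if validator(answer):
            return answer  # error prevention succeeded
        # Prompt refinement: ask for a tighter, more checkable reply.
        prompt = question + " Answer with a single word."
    return None  # fallback: escalate to human oversight

# A simple validator that rejects empty or hedged replies.
def is_confident(answer: str) -> bool:
    return bool(answer) and "not sure" not in answer.lower()

print(validated_answer("capital of France?", is_confident))      # Paris
print(validated_answer("population of Atlantis?", is_confident)) # None
```

In a production setting the validator might cross-check the answer against a trusted database, and the fallback would route the query to a human reviewer rather than simply returning `None`.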
Your decision to move from Oxford to Calabria some time ago drew a lot of attention. What were the main motivations behind this decision, and what opportunities do you see for Southern Italy in the field of artificial intelligence?
My longstanding collaboration with the University of Calabria, dating back to my years in Vienna and Oxford, was a key motivation for my move. UniCal has long excelled in AI and databases, the very fields I am passionate about. I have had the pleasure of working with Professor Domenico Saccà’s outstanding group and many of his former students, hosting numerous postdocs and PhD students whose talent and motivation deeply impressed me.
Becoming part of this vibrant university—recently ranked first among large Italian universities by CENSIS—is truly rewarding. The region of Calabria offers not only a high quality of life, nestled between sea and mountains, but also a dynamic atmosphere reminiscent of a new entrepreneurial era. The thriving startup scene around the university here and the arrival of major companies make it an exciting hub for AI innovation, and I am delighted to contribute to this momentum.
Georg Gottlob is an Austrian-British computer scientist and Fellow of the Royal Society, internationally recognized for his foundational work in logic in computer science, artificial intelligence, and database theory. He was previously a professor at Oxford University and TU Wien, and is now a Professor at the University of Calabria, where he collaborates on AI and data reasoning projects.
Gottlob has previously co-founded three startup companies that were acquired by international players such as McKinsey and Meltwater. He is currently incorporating the new spin-off “Unlimidata” in Calabria, as an affiliate of the London-based Unlimidata Limited. His research spans topics such as knowledge graphs, non-monotonic logic, and scalable reasoning systems. He is also a recipient of major European research grants and a strong advocate for interdisciplinary innovation.