Apple's study reveals the limits of AI reasoning and debunks myths of imminent AGI. Can AI truly think, or is its "thinking" just an illusion? Let's explore the findings.
In a world captivated by the wonders of artificial intelligence (AI), it's easy to get carried away with visions of superintelligent machines and AI systems that surpass human intellect. However, a recent study by Apple throws a bucket of cold water on these fiery imaginations, questioning the true capabilities of current AI and, notably, Large Language Models (LLMs).
In the paper titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity", authors Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar bring a critical perspective to the AI discourse. They challenge the claims made by creators of LLMs, suggesting that the idea of nearing Artificial General Intelligence (AGI) or superintelligence is far-fetched.
The researchers went beyond standard benchmarks, which have been criticized within AI circles and beyond for their narrow focus and susceptibility to training-data contamination. Instead, they examined how LLMs and their purportedly more deliberate counterparts, Large Reasoning Models (LRMs), tackle controllable puzzle challenges such as the Tower of Hanoi and River Crossing, where everyone must be ferried across safely and without disaster.
These puzzles act as a litmus test for genuine reasoning, a task that is particularly arduous for LLMs, which have neither an internal grasp of correctness nor any actual connection to reality.
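To see why such puzzles make useful reasoning probes, consider the Tower of Hanoi: the solution procedure is trivially short to state, yet the number of required moves grows exponentially with the number of disks (2^n - 1), so difficulty can be dialed up precisely while the rules stay fixed. Here is a minimal Python sketch of the puzzle itself (an illustration only, not the paper's evaluation harness):

```python
def hanoi(n, source, target, spare, moves):
    """Recursively solve Tower of Hanoi with n disks, appending each move to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks out of the way
    moves.append((n, source, target))           # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 smaller disks on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7 moves, i.e. 2**3 - 1
for disk, src, dst in moves:
    print(f"move disk {disk}: {src} -> {dst}")
```

A system that truly reasons should be able to follow this same procedure for any number of disks; the study found instead that model accuracy collapses once the required move sequence grows long enough, even though the rules never change.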
Apple's team made several key observations:

- At low complexity, standard LLMs actually outperformed LRMs, reaching correct answers with less effort.
- At medium complexity, the extra step-by-step "thinking" of LRMs paid off, and they pulled ahead.
- At high complexity, both kinds of models suffered a complete accuracy collapse, failing entirely once the puzzles crossed a certain threshold.
The study highlighted peculiarities, too, such as LRMs cutting back on reasoning effort as tasks grew more complex, spending fewer "thinking" tokens even when they had plenty of budget to spare. It was akin, one might say, to a marathon runner quitting at mile 20 despite having ample energy left.
Another odd behavior the researchers noted was "overthinking." On easier problems, LRMs would often reach a correct solution early in their reasoning trace and then keep analyzing alternatives anyway, consuming unnecessary computational power and energy, with obvious efficiency and environmental costs.
This research challenges the overenthusiastic and sometimes uncritical optimism surrounding generative AI, a burgeoning industry whose hype fuels the sale of masterclasses, consultations, books, and more. It aligns with sentiments expressed by prominent researchers, such as Yann LeCun, who argue that superintelligence isn't just around the corner; it may not be coming at all.
We are not going to get to human-level AI by just scaling up LLMs. This is just not going to happen. There's no way — absolutely no way.
Yann LeCun, Chief AI Scientist, Meta
Apple's findings reinforce the importance of grounding our AI expectations in reality. While the hype around AI, especially generative AI, has led to a rush of investments and prognostications, it's crucial to discern facts from fantasy.
Generative AI isn't going anywhere—it will indeed play a vital role in assisting and improving our control over various devices and systems. However, claiming that it will evolve into a superintelligent entity is to sell an illusion, not a reality.
As we move forward, let this study be a reminder to maintain a healthy skepticism and to apply our human wisdom when evaluating the true potential of AI technologies. It's an exciting evolution in our lives, no doubt, but one that must be approached with an informed and discerning perspective.