Insightera
The Illusion of Reason: Why AI Fails Aristotle’s Test of Thought
Also covering EU regulations, new unicorns, and police use of AI

Executive Summary
After the August break, we are back with the latest AI news and a new research article. This time, we explore what thinking really means and whether AI qualifies.
News:
The EU is falling badly behind on AI Act implementation.
iPhone 17 is coming.
A dozen new unicorns were minted in Europe this year.
AI is being used in an area where it (probably) shouldn't be.
Was this email forwarded to you? Subscribe here.
News
🇪🇺 Many EU laggards on naming AI Act regulators. Link
Only seven EU countries have formally named AI Act supervisors so far, with Portugal among the many laggards alongside France, Germany, Italy, and Belgium.
The AI Board pushed its September meeting to October as implementation stalls. Spain stands out with a dedicated AESIA agency already in place.
Companies now face compliance uncertainty and potential enforcement risk as EU deadlines slip and national authorities rush to stand up oversight for general‑purpose AI through 2027.
🍎 iPhone 17: timing, models, and AI angle. Link
Apple’s next iPhone lineup will be unveiled on 9 September, with a rumored ultra‑thin “iPhone 17 Air” alongside Pro/Pro Max hardware updates.
Reporting points to design tweaks, camera upgrades, and performance gains.
A refreshed lineup helps sustain upgrade demand ahead of deeper AI features, reinforcing ecosystem engagement and premium pricing.
🦄 Europe’s unicorn count climbs in 2025. Link
Despite fewer mega‑rounds, Europe minted a dozen new unicorns in H1 2025, with momentum likely to continue as funding season restarts.
Sectors drawing capital include biotech, defense tech, and AI, with examples like Sweden’s Lovable and the UK’s Fuse Energy crossing $1B valuations.
Investor conviction in AI‑first and hard‑tech theses points to more late‑stage rounds and selective IPO windows into 2026.
🚔 Police call centers tap AI due to staffing crisis. Link
Chronic understaffing is pushing U.S. 911 centers to pilot AI voice assistants for non‑emergency calls and triage.
Vendors route true emergencies to humans while automating intake and reporting in dozens of centers.
If efficacy and guardrails are maintained, AI can reduce dispatcher burnout and queue times, though accountability and failure‑mode oversight remain essential to adoption.
Discussion

The Illusion of Reason: Why AI Fails Aristotle’s Test of Thought
I hear more and more speculation that AI is capable of thinking and reasoning, and that intelligence is no longer the prerogative of humans alone. But is that really the case? Can machines actually ‘think’, or is it merely a simulation of intelligence?
To analyse this, we need a fixed, well-defined paradigm: one that lets us evaluate AI’s reasoning process against specific parameters, compare it to human reasoning, and decide whether what ChatGPT does can truly be called ‘thinking’.
As a paradigm to evaluate AI’s ‘thinking’, we will use the work of Aristotle, one of the first philosophers to divide thinking and knowledge into three categories: sophia (theory), phronesis (practice), and nous (intuition).
Nous refers to the ability to grasp first principles directly. While AI systems can apply learned patterns, they show no evidence of the intuitive intellectual insight that Aristotle considered fundamental to human thinking. LMs cannot step back from their processing and grasp the underlying principles that govern their operations.
We can see this shortfall when models fail after minor rephrasings. In addition, the output of LMs depends on probability distributions, not on apprehending foundational truths that hold independently of the training distribution. That is why careful phrasing and prompt engineering lead to significant accuracy boosts.
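The point that an LM’s output is drawn from a probability distribution over tokens, rather than derived from grasped first principles, can be made concrete with a toy next-token sampler. The vocabulary, logits, and prompt below are invented purely for illustration; real models operate over vocabularies of tens of thousands of tokens.

```python
import math
import random

def softmax(logits):
    """Convert raw logits into a probability distribution (sums to 1)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(probs, vocab, rng):
    """Draw one token according to the distribution, as an LM decoder does."""
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical vocabulary and logits for the prompt "The capital of France is"
vocab = ["Paris", "London", "Berlin"]
logits = [4.0, 1.0, 0.5]

probs = softmax(logits)
rng = random.Random(0)
token = sample_next_token(probs, vocab, rng)
```

Note that even with a heavily skewed distribution, the wrong token can still be sampled: the model is betting on statistical likelihood, not asserting a truth it has understood.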
For the second aspect, phronesis, Aristotle highlights the necessity of lived experience for full human wisdom. While current AI is capable of analysing large quantities of data, suggesting strategies, and simulating reflective thinking, software systems have no lived experience of the consequences of actions. They may have information about the context, but they lack actual understanding of the meaning behind it.
It might even be said that an actual physical body is required to fully experience and practice knowledge. So phronesis is also not where AI shines at the moment.
The last component, sophia, refers to the ability to apply knowledge and reuse it systematically across domains. Evaluations of “reasoning models” show that models often fail to use explicit algorithms, reason inconsistently, and degrade past a complexity threshold, which contradicts the stability required to be fully in line with sophia.
Without nous, AI cannot anchor its reasoning in first principles. It mostly spots patterns that co-occur, not the underlying causes or axioms that make those patterns necessarily true.
Without phronesis, it lacks the judgment that comes from experience and careful perception of what matters in each specific case.
Without sophia, it cannot integrate principles and demonstrations into robust, transferable theoretical understanding that works even when the context changes.
So we can conclude that, by Aristotle’s dimensions, AI is not capable of thinking. To a large extent, it only imitates human thought.
Our service
Do you want to deeply understand your users? Reply directly to this email or reach out to [email protected]. More information about our service is available here.
How did we do this week?