Someone to blame: AI Implementation Bottleneck
Welcome to our new format
Was this email forwarded to you? Subscribe here.
Discussion
AI is everywhere in organizations. And yet, where decisions actually matter, nobody trusts it.
Consider an investment manager, Paul, who has replaced his full team of five analysts with AI agents. The agents produce flawless-looking, extremely convincing research papers far faster than the old-school human team ever could. As the daily volume of AI-generated papers grows, all in the name of enabling better decisions, a problem emerges: Paul simply cannot reliably validate that much information. So should he fully trust the final decision made by the AI, or spend extra time validating it and going through the AI's thinking process?
If Paul goes with the first option, fully trusting the LLM-generated summaries, he becomes a rubber stamp, taking full accountability for work he has no real knowledge of.
When working with humans, Paul could at least use his managerial skills to gauge any sign of an analyst's doubt, read body language, validate work results, and calibrate his confidence accordingly. AI removes all of that. He is left analyzing pure, hard information with no emotional signals, stripping away the mechanism managers rely on to decide when to push back.
It works until it doesn't. One big flaw, one piece of incorrect information passed downstream, and all the blame falls on the person who signed off: Paul.
Alternatively, if our hero decides to go through everything the AI generates and verify its accuracy, throughput becomes tied to Paul's own capacity, erasing much of the productivity boost the AI was supposed to deliver. This is exactly the bottleneck organizations now face.
In reality, there is a third option, something in between: Paul skims the AI-generated work without the capacity to fully validate every aspect of it. If the numbers look roughly right, he applies his quality stamp and the decision moves downstream.
Every new error produced under this setup deepens the distrust Paul accumulates. The errors are blamed on him; the AI model that produced the flawed output is never examined, because officially it wasn't the AI that failed. It was Paul.
At low stakes, Paul's errors are survivable and forgettable. But where decisions truly matter, such as where to invest, failure is visible, career-defining, and permanent. At that level, no manager will outsource their reputation to a system they've never been able to trust.
AI will go exactly as far as managers are willing to stake their reputation on it. So far, that hasn't been very far.
Our service
Do you want to deeply understand your users? Reply directly to this email or reach out to [email protected]. More information about our service is available here.