Thoughts on AI and Inequality
Also about Google, DEI, and Lovable
Executive Summary
In this edition, we dive into the latest in AI and robotics, chat about Apple's postponed updates for Siri, and explore DeepMind's Gemini model, which is making strides in how we interact with the physical world. Plus, we take a look at Lovable AI, a tool that's quickly becoming a favorite for building software. Our discussion post sheds light on how AI is influencing inequality.
News
🍎 Apple is struggling to bring true AI to Siri. Link
Apple has delayed new Siri AI features because they're not ready yet and other projects have become more urgent, with no clear date for release.
Company leaders prefer to wait until the Siri updates work properly instead of releasing unfinished features.
Apple’s AI strategy is questionable: the company relies on collaboration with model makers rather than developing its own AI model, and has integrated OpenAI’s LLM within its software.
≠ Diversity, Equity and Inclusion takes a toll. Link
Hundreds of companies have swiftly removed references to "diversity, equity, and inclusion" (DEI) from their annual reports following pressure from the Trump administration.
Companies are reducing DEI-related language, statistics, and job postings due to fears of government investigations and unclear guidelines on what policies might be considered illegal discrimination.
While many firms are shifting focus toward broader terms like "inclusion" or emphasising merit-based hiring, some companies continue to actively support DEI initiatives despite the political backlash.
🤖 New model for robots by Google. Link
Google DeepMind introduced Gemini Robotics and Gemini Robotics-ER, new AI models designed to help robots better understand and interact with the physical world through advanced vision, language, and action capabilities.
Gemini Robotics significantly boosts robots' abilities to adapt, respond to conversational instructions, and perform precise tasks like folding origami or packing snacks.
While Gemini Robotics represents a major step forward in robotic capabilities and adaptability, its real-world success will depend on how effectively it addresses safety concerns, and integrates across diverse robot types.
AI tool of the week
Lovable AI
While this tool’s name may have some unrelated associations, it is capable of building simple software for you in seconds. Just describe your idea in plain language, and Lovable instantly creates a working application, complete with design, images, and interactive features.
This Swedish startup has now raised $15 million in a pre-Series A round led by Creandum.
Lovable has reached 30,000 paying users since its launch in 2023, making it one of the fastest-growing European startups.
However, keep in mind that Lovable has limitations: it can struggle with complex designs, might not speed things up for seasoned coders, and leaves non-technical users uncertain about security and debugging issues.

Screenshot from lovable.dev
Discussion

Karl Marx
How AI will affect inequality
Witnessing the rise of so-called artificial intelligence, people speculate about how it will affect our future. Will AI make life better or worse? Will we still exist in 20 years? And so on. I want to attempt to clarify the most likely scenario for inequality, given current trends and observations. Personal bias is not eliminated.
The main trends in the AI market are the following:
1. AI models are becoming more accessible
OpenAI had a first-mover advantage, which was quickly undermined by Anthropic, Meta, Google and xAI (and surely others I have missed). While still in its infancy, the industry has already developed at least six well-established players who compete for market share, launching better, more efficient and more capable models every quarter.
It’s a classic growth stage, during which companies worry about market share more than profit. As it happens, consumers are enjoying better and cheaper models.
It can be argued that prices will be reduced even further once the industry reaches maturity, delivering constantly better and cheaper products.
2. Top-of-the-market AIs are getting expensive
While models in general are becoming cheaper and more widely available, there are also signs of a ‘premium’ market segment. OpenAI already offers a ‘Pro’ subscription for $200/month and is rumoured to be launching a high-end $20,000/month agent. At this price point, such products are largely unavailable to the general public.
Based on this trend, the more a user pays, the higher-quality the model they receive: the more you pay, the smarter and more efficient your AI product will be. If only some users have access to the best models, they will be able to complete their work faster, while those who pay less will be at a disadvantage, leading to self-reinforcing inequality.
And until competitors with better and cheaper alternatives emerge, this premium market niche will be secured. This is the main factor which can lead to inequality within the AI market itself.
In my opinion, due to rapid product development, some degree of premium pricing will persist, even if the $20,000/month figure is just a rumour. Since new models are more expensive to develop and operate, it makes sense to offer them at a premium price.
3. AI is restricted or banned in several industries
Many well-established companies have banned LLM usage despite knowing that these tools can significantly improve employee performance. It has been reported that 67% of companies restrict AI usage to some degree, and around a quarter have banned it entirely. While this survey can be questioned, I know firsthand that several well-established companies have banned LLMs internally. The main concerns are a lack of privacy and leakage of proprietary information. The main sectors where AI usage is restricted include finance, healthcare and education.
It is up for debate whether AI usage inside companies will be loosened once LLM development becomes so commoditised that almost every self-respecting company could develop one internally. Even if this happens, regulations are likely to remain in place, preventing the total outsourcing of certain economic activities to models.
It is not easy to sue an LLM; a human has to take responsibility.
The recently published EU AI Act defines AI activities according to risk categories, stating that some use-cases should be totally banned and some should be restricted and undergo an approval process.
As things stand, I largely support Marc Andreessen’s argument that regulation will significantly constrain AI expansion, leaving sustained employment in core sectors of the economy such as healthcare, education, government services, defence and household services.
4. No mass unemployment yet
I think the most dangerous occupation to be in right now is a highly automatable customer-support role, where every conversation is repetitive and any deviation from the script is escalated to another team. If that sounds like you, get out as soon as possible. I am less confident that AI will significantly undermine other occupations. Solid coding skills are still required to develop properly functioning software. Writers are not being massively replaced by AI but rather enhanced by it. And if generative AI can fully replace your design work, then you were not such a good designer in the first place.
On a macro scale, I don’t see mass unemployment from AI coming either. Yes, the points mentioned above will lead to short-term redundancies and job losses in unregulated sectors, but these losses will be absorbed by the rest of the economy fairly quickly.
At the time of writing, OpenAI has 300 open vacancies, almost all of which require coding expertise. If the leader of the LLM market still needs human coders, that is a solid sign that a human-in-the-loop is still required and that we are not yet at a stage where AI is a replacement.
Conclusion
Several studies have confirmed that AI tools have the largest impact for people with entry-level skillsets, whereas those with advanced expertise see only negligible improvement.
In unregulated sectors, job markets will become more saturated as entry points for beginner-level jobs shrink sharply. It will become harder to differentiate oneself in the job market, and a reduction in average salaries for entry- to mid-level employees is a strong risk.
Those with the most advanced skillsets (and access to the most advanced tools) will sit at the top of the market, in sectors where AI usage isn’t banned.
People without access to AI tools will be at a significant disadvantage, widening the existing gap between those with and without internet access.
Our service
Do you need help with your data ecosystem? Reply directly to this email or reach out to [email protected]. More information about our service is available here.
Was this email forwarded to you? Subscribe here.