Forget Mass Layoffs: AI Is Quietly Sorting Winners and Losers

The robots were supposed to take our jobs. Instead, they are sorting us into winners and losers while we argue about the wrong question entirely.
For the better part of three years, the dominant anxiety about artificial intelligence in the workplace has been binary: will it replace us, or won't it? Governments have convened panels. Think tanks have published forecasts. CEOs have made pledges about “responsible deployment.” And through all of it, the conversation has orbited a single, dramatic scenario: mass displacement, a wave of redundancies, the hollowing out of the white-collar middle class.
But in March 2026, Anthropic, the San Francisco-based AI company behind the Claude family of large language models, published a piece of labour market research that quietly reframed the entire debate. Their study, “Labor market impacts of AI: A new measure and early evidence,” introduced a novel metric called “observed exposure” and used millions of real Claude interactions mapped against roughly 800 occupations in the O*NET database to measure not what AI could theoretically do to jobs, but what it is actually doing right now. The headline finding was almost anticlimactic: AI is not yet replacing jobs at scale. There has been no systematic rise in unemployment among workers in the most AI-exposed occupations.
The less comfortable finding, buried deeper in the data, was this: AI is already creating a measurable skills divide. Hiring of workers aged 22 to 25 in highly exposed occupations has dropped roughly 14 percent compared to pre-ChatGPT levels. The researchers noted this finding was “just barely statistically significant,” but the directional signal is hard to ignore. The first measurable labour market effect of generative AI is not a pink slip. It is a closed door.
And that might be worse.
The Gap Between Can and Does
Anthropic's study is notable not for what it predicts but for what it measures. Previous attempts to gauge AI's impact on employment, including the widely cited 2023 research by Eloundou and colleagues, relied on theoretical exposure: estimating whether a large language model could, in principle, make a given task at least twice as fast. By that measure, the numbers look staggering. Theoretical AI coverage for Computer and Mathematical occupations sits at 94 percent. For Office and Administrative Support roles, it is 90 percent.
But theoretical capability is not the same as economic reality. Anthropic's observed exposure metric tracks what is actually happening in professional settings by counting which tasks show sufficient work-related usage in Claude traffic, then weighting fully automated implementations at full value and augmentative use (where humans remain in the loop) at half weight. The result is a far more sober picture. In Computer and Mathematical roles, Claude currently covers just 33 percent of tasks. For the most exposed individual occupations, the figures are higher but still well below ceiling: programmers at 74.5 percent, customer service representatives at 70.1 percent, and data entry clerks at 67.1 percent.
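The weighting scheme described above can be sketched in a few lines. Everything concrete here is invented for illustration (the task names, usage counts, and the usage threshold); only the full-weight/half-weight rule comes from the study's description of the metric:

```python
# Illustrative sketch of the observed-exposure weighting described above.
# Task names, usage counts, and the usage threshold are hypothetical;
# only the automated=1.0 / augmentative=0.5 weighting rule is from the study.

def observed_exposure(tasks, usage_threshold=100):
    """Share of an occupation's tasks covered by AI, weighting fully
    automated use at 1.0 and augmentative (human-in-the-loop) use at 0.5."""
    covered = 0.0
    for task in tasks:
        if task["usage_count"] < usage_threshold:
            continue  # too little work-related traffic to count as exposed
        weight = 1.0 if task["mode"] == "automated" else 0.5
        covered += weight
    return covered / len(tasks)

tasks = [
    {"name": "draft report",   "usage_count": 500, "mode": "augmented"},
    {"name": "clean data",     "usage_count": 900, "mode": "automated"},
    {"name": "client meeting", "usage_count": 10,  "mode": "augmented"},
    {"name": "write queries",  "usage_count": 300, "mode": "automated"},
]

print(observed_exposure(tasks))  # 2.5 weighted tasks out of 4 -> 0.625
```

The half weight for augmentative use is what keeps observed exposure well below theoretical coverage: a task a human still supervises counts as only partially absorbed.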
At the other end of the spectrum, theoretical AI coverage is lowest in grounds maintenance at just 3.9 percent, followed by transportation at 12.1 percent, agriculture at 15.7 percent, food preparation and serving at 16.9 percent, and construction at 16.9 percent. The divide is not merely between AI-proficient workers and everyone else. It is between entire categories of work that exist in fundamentally different relationships to the technology.
The gap between theoretical and observed exposure is, in a sense, the breathing room the labour market currently enjoys. But it is also a measure of latent disruption. As Anthropic's researchers note, tracking how that gap narrows over time provides a real-time indicator of economic transformation as it unfolds. The question is not whether AI can reshape these occupations. It is how quickly the observed line catches up to the theoretical one.
Anthropic's earlier Economic Index report, published in January 2026, provides additional context. That study, based on a privacy-preserving analysis of two million AI conversations split between consumer and enterprise use, found that in early 2025, 36 percent of occupations used Claude for at least a quarter of their tasks. By the time data was pooled across subsequent reports, that figure had risen to 49 percent. The trajectory is clear. What was niche behaviour a year ago is becoming standard practice for nearly half of all tracked occupations. And for the workers on the wrong side of the emerging divide, the pace of that convergence matters enormously.
Power Users and the Compounding Loop
If Anthropic's research tells us what AI is doing to the labour market in aggregate, a separate body of evidence reveals what it is doing to individual workers. And here the picture is sharper, more unequal, and considerably more troubling.
OpenAI's 2025 State of Enterprise AI report documented a sixfold productivity gap between power users and everyone else. Workers at the 95th percentile of AI adoption send six times as many messages to ChatGPT as the median employee at the same companies. For coding tasks specifically, the heaviest users engage 17 times more frequently than their typical peers. Among data analysts, the most active users employ AI data analysis tools 16 times more often than the median. Over the past year, weekly messages in ChatGPT Enterprise increased roughly eightfold, and the average worker sends 30 percent more messages than they did a year prior. Seventy-five percent of enterprise users report being able to complete entirely new tasks they previously could not perform.
The numbers translate directly into time. Workers who applied AI to seven or more distinct tasks reported saving over 10 hours per week. Those using it for fewer than three tasks reported no time savings at all. This is not a gentle gradient. It is a cliff edge.
What makes this particularly consequential is the compounding nature of the advantage. Workers who experiment broadly with AI discover more uses, which leads to greater productivity gains and better performance reviews, which leads to more interesting assignments and faster advancement, which in turn provides more opportunity and incentive to deepen AI usage further. The Debevoise Data Blog described this dynamic in January 2026 as a self-reinforcing cycle: “AI success leads to more AI success,” with early adopters developing intuitions and workflow habits that simply cannot be shortcut by intensive late-stage training. Organisations that wait until 2027 to address their AI skills gap, the analysis argued, will find themselves competing for a shrinking pool of trainable talent against firms that started building capability in 2024 and 2025. Those firms that are ahead now will find it relatively easy to stay ahead, the analysis continued, especially if they can recruit talent away from firms that have fallen behind.
Gensler's 2026 Global Workplace Survey, which polled 16,459 full-time office workers across 16 countries, adds another dimension. About 30 percent of employees now qualify as AI power users, defined as people who regularly use AI tools in both professional and personal contexts. More than half of these power users are under 40, and nearly a third are managers. These workers score significantly higher on innovation, engagement, and team relationships. They spend less time working alone (37 percent of their week versus 42 percent for late adopters) and more time learning (12 percent versus 8 percent) and socialising (11 percent versus 9 percent). Seventy percent of AI power users say learning is highly critical to their job performance. They are three times more likely to perceive their organisations as among the most innovative in the sample.
This is not the profile of someone coasting on a productivity hack. It is the profile of someone whose entire relationship to work has been restructured around a new set of capabilities, and whose career trajectory is diverging from peers who have not made the same transition.
Who Falls Behind, and Why It Is Not Random
The demographics of AI exposure complicate any simple narrative about technology helping the little guy. Anthropic's research found that workers in the most exposed professions “are more likely to be older, female, more educated, and higher-paid.” This inverts the usual pattern of technological disruption, where low-skilled, low-wage workers bear the heaviest costs. AI's first targets are not factory floors or retail counters. They are the knowledge-work occupations that have historically offered stable, well-compensated careers.
At the same time, the youth hiring slowdown suggests that the entry points to those careers are narrowing. If organisations can get 33 percent of a junior analyst's work done through an AI system, the calculus around hiring a new graduate changes. You do not necessarily fire the senior analyst. You simply do not replace the intern. The result is an invisible contraction: no layoffs, no headlines, just a quiet thinning of opportunity at the bottom of the professional ladder. As Anthropic's researchers cautioned, the young workers who are not hired may be remaining at their existing jobs, taking different jobs, or returning to education. The displacement, if that is even the right word, is diffuse and hard to track through conventional unemployment statistics.
This matters because early career experience has always been the mechanism through which workers build the skills, networks, and institutional knowledge that drive later advancement. A 22-year-old who spends two years doing data cleaning, attending meetings, and learning the rhythms of a professional environment is accumulating human capital that no online course can replicate. If AI shrinks the pool of those formative roles, the long-term consequences extend well beyond the immediate hiring numbers. It creates a generational bottleneck: not a single event, but a gradual narrowing of the pipeline through which junior talent enters and eventually rises within knowledge-work professions.
The World Economic Forum's Future of Jobs Report 2025 projected that 170 million new jobs will be created globally by 2030, while 92 million will be displaced, yielding a net gain of 78 million positions. But the same report warned that 59 percent of the global workforce will need reskilling or upskilling by 2030, and that 120 million workers face medium-term risk of redundancy if training systems fail to keep pace. The skills gap, the report noted, is the single most significant obstacle to business transformation, cited by 63 percent of employers. By 2030, 77 percent of employers plan to prioritise reskilling and upskilling their workforce to enhance collaboration with AI systems. The intent is there. Whether the execution will match the ambition is another question entirely.
The question is whether the workers who need reskilling most are the same ones who are positioned to receive it. The evidence suggests they are not.
The Training Paradox
Corporate AI training is booming. It is also, by most measures, failing.
A February 2026 DataCamp and YouGov survey of 517 business leaders in the United States and United Kingdom found that 82 percent of enterprise leaders say their organisation provides some form of AI training. And yet 59 percent of those same leaders report an AI skills gap within their workforce. Only 35 percent say they have a mature, organisation-wide upskilling programme in place. The access is there. The capability is not.
The problem, according to DataCamp's analysis, is structural. Most corporate AI training still follows a passive, course-based model: video lectures, multiple-choice assessments, completion certificates. Twenty-three percent of leaders surveyed said video-based courses make it difficult for employees to apply skills in the real world. The training exists in a vacuum, disconnected from the actual workflows where AI tools would be used. Workers complete modules and tick boxes, but the gap between knowing what a large language model is and knowing how to restructure your daily work around one remains vast.
This finding aligns with the EY 2025 Work Reimagined Survey, which polled 15,000 employees and 1,500 employers across 29 countries and found that organisations are missing up to 40 percent of potential AI productivity gains due to gaps in talent strategy. Among organisations experiencing AI-driven productivity improvements (96 percent of those investing in AI), only 17 percent reported that those gains led to reduced headcount. Far more were reinvesting in new AI capabilities (42 percent), cybersecurity (41 percent), research and development (39 percent), and employee upskilling (38 percent).
The pattern is revealing. Organisations are spending on AI training. They are not firing people because of AI. But they are also not succeeding at turning their existing workforce into proficient AI users at anything close to the speed required. The result is a two-track system within organisations: a minority of self-motivated power users who are pulling ahead, and a majority who have attended the workshops but have not fundamentally changed how they work.
McKinsey's January 2025 report on “Superagency in the workplace” put this disconnect in stark terms. While 92 percent of companies plan to increase AI investments over the next three years, only 1 percent report that they have reached what McKinsey considers AI maturity. The report also found that employees are three times more likely than leaders expect to be using generative AI for at least 30 percent of their daily work. Nearly half of C-suite leaders believe their companies are moving too slowly on AI development, citing leadership misalignment and lack of talent as the primary obstacles. The gap is not just between workers and AI. It is between what organisations think is happening with AI adoption and what is actually happening on the ground.
DataCamp's research found that organisations with mature, workforce-wide upskilling programmes are nearly twice as likely to report significant positive AI return on investment. The implication is clear: the training itself is not the bottleneck. The quality, structure, and integration of training into daily work are what separate organisations that capture AI value from those that do not. And that distinction maps uncomfortably well onto existing inequalities in corporate resources, management quality, and organisational culture.
The Wage Premium and the Widening Gulf
PwC's 2025 Global AI Jobs Barometer, which analysed close to a billion job advertisements from six continents, quantified the financial dimension of the AI skills divide. Jobs requiring AI skills now command a 56 percent wage premium over comparable roles, more than double the 25 percent premium recorded the previous year. Skills demands in AI-exposed occupations are changing 66 percent faster than in other roles, up from 25 percent the year before. And jobs requiring AI skills are growing 7.5 percent year on year, even as total job postings fell 11.3 percent.
These numbers describe an accelerating divergence. Workers who acquire and maintain AI proficiency are not just keeping pace; they are pulling away from the pack in measurable economic terms. A 56 percent wage premium is not a marginal advantage. It is the kind of differential that, compounded over a career, produces fundamentally different life outcomes: different housing, different schools for children, different retirement trajectories.
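As a rough illustration of how such a differential compounds, consider two otherwise identical careers. The starting salary, annual raise, and career length below are hypothetical; only the 56 percent premium comes from the PwC figures cited above:

```python
# Rough illustration of how a wage premium compounds over a career.
# The starting salary, annual growth rate, and career length are
# hypothetical; only the 56% premium figure comes from the PwC data.

def career_earnings(start_salary, annual_growth, years):
    """Total nominal earnings over a career with steady annual raises."""
    return sum(start_salary * (1 + annual_growth) ** y for y in range(years))

base = career_earnings(50_000, 0.03, 40)            # no AI premium
premium = career_earnings(50_000 * 1.56, 0.03, 40)  # 56% premium throughout

# Over 40 years, the premium worker earns more than two million extra
# in nominal terms on these assumptions.
print(round(base), round(premium), round(premium - base))
```

Because the premium applies to every subsequent raise, the absolute gap grows each year even though the percentage differential stays fixed, which is the sense in which a one-time skills advantage becomes a lifetime one.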
The acceleration is equally significant. When skill demands change 66 percent faster in one set of occupations than in others, the half-life of any given training investment shrinks accordingly. A worker who completes an AI literacy course in 2026 may find its content partially obsolete by 2027. This creates a treadmill effect that disproportionately burdens workers with less time, fewer resources, and less institutional support for continuous learning. It also creates a recruitment spiral. Workers with AI skills command higher salaries, which means they gravitate towards organisations that already have strong AI cultures, which further concentrates capability in firms that are already ahead.
PwC's data also contained a counterintuitive finding: productivity growth has nearly quadrupled in industries most exposed to AI, rising from 7 percent over the 2018 to 2022 period to 27 percent over 2018 to 2024 in sectors like financial services and software publishing. Jobs continue to grow even in the most easily automated roles. AI, in other words, is making people more valuable, not less. But the value accrues unevenly, and the distribution of that value tracks closely with the distribution of AI competence.
The Five-and-a-Half Trillion Dollar Question
IDC, the technology research firm, has put a price tag on the AI skills gap: $5.5 trillion in projected global economic losses by 2026, stemming from delayed products, quality issues, missed revenue, and impaired competitiveness. Over 90 percent of global enterprises, by IDC's estimate, will face critical AI skills shortages. Ninety-four percent of CEOs and CHROs identify AI as their top in-demand skill, yet only 35 percent feel they have adequately prepared their employees. Only a third of employees report receiving any AI training in the past year, even as half of employers report difficulty filling AI-related positions.
The scale of the mismatch is staggering. There are currently 1.6 million open AI positions globally, against approximately 518,000 qualified candidates, a demand-to-supply ratio of roughly three to one. And the positions going unfilled are not niche research roles at frontier labs. They are the applied, mid-level positions where AI tools meet business operations: the prompt engineers, the automation specialists, the analysts who can bridge the gap between a model's capabilities and an organisation's needs.
The barriers to closing this gap are not mysterious. IDC's research identified the key obstacles as lack of talent (46 percent), data privacy concerns (43 percent), poor data quality (40 percent), high implementation costs (40 percent), and unclear return on investment for AI programmes (26 percent). These are not exotic challenges. They are the ordinary frictions of organisational change, amplified by the speed at which AI capabilities are advancing.
IDC projects that AI technologies themselves will eventually shave about a trillion dollars off skill-gap losses by 2027, as AI tools become more intuitive and self-service. But that still leaves trillions in unrealised value, and it assumes a level of organisational readiness that the DataCamp and EY surveys suggest is far from guaranteed.
The irony is hard to miss. The tool that is supposed to democratise knowledge work is, in its current deployment phase, concentrating advantage among those who already have the skills, resources, and institutional support to learn how to use it. AI's promise of universal empowerment remains real. Its present reality is stratification.
Structural Shift or Growing Pains?
The critical question embedded in all of this data is whether the AI skills divide is a temporary adjustment, a transitional friction that will smooth out as tools improve and training catches up, or a permanent structural feature of the labour market.
The case for optimism rests on several reasonable premises. AI tools are becoming more user-friendly with each generation. Natural language interfaces have dramatically lowered the barrier to entry compared to previous waves of technology. Companies are investing heavily in training, even if current programmes are imperfect. PwC's data shows that AI is creating jobs and boosting productivity broadly, not just for an elite few. And 85 percent of organisations plan to increase their investment in upskilling employees through the period from 2025 to 2030, according to multiple industry surveys.
But the case for structural concern is stronger, and it rests on the compounding dynamics that multiple independent studies have now documented. The Debevoise analysis identified a self-reinforcing cycle where early AI adopters develop capabilities that accelerate their further adoption, creating a widening gap that late entrants cannot easily close. OpenAI's data shows a sixfold productivity differential that maps onto usage intensity. Anthropic's observed exposure metric reveals that even within occupations theoretically saturated by AI capability, actual adoption is unevenly distributed.
The OECD's 2025 report on bridging the AI skills gap acknowledged that current adult training systems “often favour those already advantaged by higher education, widening opportunity gaps.” The report recommended that governments expand incentives for AI training, improve accessibility and inclusivity, and invest in modular credentials and recognition of prior learning. These are sensible policy proposals. They are also the kind of recommendations that take years to implement and decades to show results.
Meanwhile, the compounding loop runs at the speed of quarterly performance reviews and annual promotion cycles. Every month that a power user pulls further ahead is a month that makes the gap harder to close. Every junior role that goes unfilled because AI handles part of its function is a career pathway that becomes slightly narrower. The structural argument is not that these trends are irreversible. It is that they are self-reinforcing, and that the window for intervention narrows with each passing quarter.
What Organisations Get Wrong
The most common corporate response to the AI skills divide is to treat it as a training problem. It is not. It is a management problem, a culture problem, and, increasingly, a strategic problem.
Training, as the DataCamp survey makes clear, is a necessary but insufficient condition for building AI capability. What separates organisations that successfully embed AI into their workflows from those that do not is not the availability of courses but the integration of AI tools into actual work processes, with management support, performance incentives, and tolerance for experimentation. McKinsey's superagency report found that 48 percent of employees rank training as the most important factor for AI adoption, but training alone, without the organisational scaffolding to support its application, produces graduates who know the theory but cannot implement it.
The EY survey found that 96 percent of organisations investing in AI report some productivity gains. But the distribution of those gains within organisations is wildly uneven, with a handful of power users capturing the majority of value while the broader workforce remains largely unchanged. This suggests that the barrier is not technological but organisational: the tools work, but most organisations have not restructured roles, workflows, and incentives to make broad adoption possible.
Companies that lead in AI adoption, according to OpenAI's enterprise report, enjoy 1.7 times higher revenue growth, 3.6 times greater total shareholder return, and 1.6 times higher EBIT margins compared to laggards. The correlation between AI adoption and financial performance is becoming impossible to ignore. And yet the mechanisms for spreading AI proficiency remain largely ad hoc, dependent on individual initiative rather than systematic organisational design.
This is the paradox at the heart of the AI skills divide. The technology is genuinely democratising in its potential. Anyone with access to a large language model can, in theory, perform analyses, draft documents, and automate workflows that previously required specialist expertise. But “in theory” is doing a lot of heavy lifting. In practice, the workers who extract the most value from AI are those who already possess the skills, confidence, and institutional support to experiment effectively. The tool is egalitarian. The context in which it is deployed is not.
The Policy Vacuum
Government responses to the AI skills divide have been, with some exceptions, sluggish and incremental. The OECD has called for expanded AI training incentives, improved accessibility, and investment in connected learning pathways that allow workers to move more fluidly between vocational and academic routes. The European Parliament has commissioned research on AI's role in reshaping the European workforce. The World Economic Forum continues to publish increasingly urgent reports about the scale of reskilling required.
But the gap between policy aspiration and implementation remains wide. Most OECD countries do not yet have comprehensive AI literacy programmes targeted at working adults. Funding for reskilling tends to flow through existing institutional channels, which, as the OECD itself acknowledges, “often favour those already advantaged by higher education.” The workers most at risk of falling behind are precisely the ones least served by current policy frameworks: those without degrees, without employer-sponsored training, without the time or resources for self-directed learning.
The speed mismatch is perhaps the most critical issue. AI capabilities are advancing on a timeline measured in months. Policy responses operate on a timeline measured in years, sometimes decades. By the time a government commission has completed its review, published its recommendations, secured funding, designed a programme, and enrolled its first cohort of learners, the AI landscape will have shifted beneath their feet. The skills taught in 2026 may be partially obsolete by 2028. The OECD's own recommendation for “modular credentials and recognition of prior learning” implicitly acknowledges this problem: long-form educational programmes are too slow for a technology that rewrites its own capabilities every few months.
This does not mean policy is futile. It means that policy alone cannot solve the problem. Effective responses will require coordination between governments, employers, educational institutions, and the AI companies themselves. They will require a willingness to experiment with new models of training delivery, credentialing, and workforce support. And they will require an honest reckoning with the fact that the AI skills divide is not simply a technical challenge to be solved with better courses. It is a distributional challenge that reflects, and threatens to amplify, existing structures of inequality.
What Comes Next
Anthropic's March 2026 study offered one final, underappreciated insight. The gap between theoretical and observed AI exposure is not closing uniformly across occupations. In some fields, adoption is accelerating rapidly. In others, it has barely begun. The trajectory of that convergence will determine, more than any other single factor, how deeply AI reshapes the labour market over the next five years.
If observed exposure converges slowly, there is time for training systems, policy responses, and organisational practices to adapt. Workers can build skills incrementally. Institutions can adjust. The transition, while painful, remains manageable.
If it converges quickly, as improvements in AI capability, agentic workflows, and enterprise integration suggest it might, the window for orderly adaptation shrinks dramatically. The 14 percent decline in youth hiring that Anthropic documented could become 30 percent, or 50 percent. The sixfold productivity gap between power users and everyone else could widen further. The 56 percent wage premium for AI-skilled workers could calcify into a permanent feature of the labour market, as entrenched and as difficult to reverse as any existing dimension of economic inequality.
The honest answer to whether AI's skills divide is temporary or structural is that it is both, simultaneously, and the balance between those two possibilities depends on choices being made right now, in boardrooms and government offices and training departments around the world. The technology does not predetermine the outcome. But the compounding dynamics are real, the clock is running, and the workers who are falling behind today are accumulating disadvantages that will become progressively harder to reverse.
The robots did not take the jobs. They created a new hierarchy within them. And unless something changes, that hierarchy is hardening fast.
References and Sources
Anthropic, “Labor market impacts of AI: A new measure and early evidence,” Anthropic Research, March 2026. https://www.anthropic.com/research/labor-market-impacts
Anthropic, “Anthropic Economic Index report: Economic primitives,” January 2026. https://www.anthropic.com/research/anthropic-economic-index-january-2026-report
Fortune, “Anthropic just mapped out which jobs AI could potentially replace. A 'Great Recession for white-collar workers' is absolutely possible,” March 6, 2026. https://fortune.com/2026/03/06/ai-job-losses-report-anthropic-research-great-recession-for-white-collar-workers/
Fortune, “Is AI about to take your job? New Anthropic research suggests the answer is more complicated than you think,” March 10, 2026. https://fortune.com/2026/03/10/will-ai-take-your-job-this-chart-in-an-economic-study-by-anthropic-may-give-you-a-hint-but-the-answer-is-complicated/
OpenAI, “The State of Enterprise AI: 2025 Report,” 2025. https://openai.com/index/the-state-of-enterprise-ai-2025-report/
VentureBeat, “OpenAI report reveals a 6x productivity gap between AI power users and everyone else,” 2025. https://venturebeat.com/ai/openai-report-reveals-a-6x-productivity-gap-between-ai-power-users-and
Debevoise Data Blog, “AI Advantages Tend to Compound, Increasing the Risks of Falling Too Far Behind,” January 7, 2026. https://www.debevoisedatablog.com/2026/01/07/ai-advantages-tend-to-compound-increasing-the-risks-of-falling-too-far-behind/
Gensler Research Institute, “Global Workplace Survey 2026,” 2026. https://www.gensler.com/gri/global-workplace-survey-2026
Gensler, “The Human Side of AI: What Power Users Are Telling Us About the Workplace,” 2026. https://www.gensler.com/blog/what-ai-power-users-tell-us-about-the-workplace
DataCamp and YouGov, “Companies Are Investing in AI, But Their Workforces Aren't Ready,” February 2026. https://www.datacamp.com/blog/the-ai-skills-gap-in-2026-why-most-ai-training-isn-t-translating-to-workforce-capability
EY, “AI-driven productivity is fueling reinvestment over workforce reductions,” December 2025. https://www.ey.com/en_us/newsroom/2025/12/ai-driven-productivity-is-fueling-reinvestment-over-workforce-reductions
EY, “EY survey reveals companies are missing out on up to 40% of AI productivity gains due to gaps in talent strategy,” November 2025. https://www.ey.com/en_gl/newsroom/2025/11/ey-survey-reveals-companies-are-missing-out-on-up-to-40-percent-of-ai-productivity-gains-due-to-gaps-in-talent-strategy
PwC, “The Fearless Future: 2025 Global AI Jobs Barometer,” 2025. https://www.pwc.com/gx/en/services/ai/ai-jobs-barometer.html
IDC via CIO Dive, “What's the cost of the IT skills gap? IDC says $5.5 trillion by 2026,” 2025. https://www.ciodive.com/news/tech-talent-skills-gaps-cost-trillions-idc/716523/
World Economic Forum, “Future of Jobs Report 2025,” January 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/
OECD, “Bridging the AI skills gap,” 2025. https://www.oecd.org/en/publications/bridging-the-ai-skills-gap_66d0702e-en.html
McKinsey, “Superagency in the workplace: Empowering people to unlock AI's full potential at work,” January 2025. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
HR Dive, “Anthropic: AI's influence over the labor market is only beginning to be felt,” March 2026. https://www.hrdive.com/news/anthropic-ai-influence-over-the-labor-market-jobs/814670/
TechCrunch, “The AI skills gap is here, says AI company, and power users are pulling ahead,” March 25, 2026. https://techcrunch.com/2026/03/25/the-ai-skills-gap-is-here-says-ai-company-and-power-users-are-pulling-ahead/
The Decoder, “Anthropic's new study shows AI is nowhere near its theoretical job disruption potential,” March 2026. https://the-decoder.com/anthropics-new-study-shows-ai-is-nowhere-near-its-theoretical-job-disruption-potential/
Workera, “The $5.5 Trillion Skills Gap: What IDC's New Report Reveals About AI Workforce Readiness,” 2025. https://www.workera.ai/blog/the-5-5-trillion-skills-gap-what-idcs-new-report-reveals-about-ai-workforce-readiness

Tim Green, UK-based Systems Theorist & Independent Technology Writer
Tim explores the intersections of artificial intelligence, decentralised cognition, and posthuman ethics. His work, published at smarterarticles.co.uk, challenges dominant narratives of technological progress while proposing interdisciplinary frameworks for collective intelligence and digital stewardship.
His writing has been featured on Ground News and shared by independent researchers across both academic and technological communities.
ORCID: 0009-0002-0156-9795
Email: tim@smarterarticles.co.uk