AI Engineering: Structural Shift or Hype Cycle?

What we’re actually talking about

AI Engineer was formally defined by Shawn Wang (Swyx) in his June 2023 essay “The Rise of the AI Engineer,” where he argued a new role was emerging between traditional software engineers and ML researchers.1 The AI Engineer builds products on top of pre-trained foundation models via APIs — focusing on product-specific data, evaluations, and user-facing integrations rather than model training. Swyx’s framing: “None of the highly effective AI Engineers I named above have done the equivalent work of the Andrew Ng Coursera courses, nor do they know PyTorch.”2

Agentic engineering was coined by Andrej Karpathy in February 2026 as the evolution beyond “vibe coding” — his earlier term for fully delegating coding to AI without scrutiny.3 Agentic engineering means orchestrating AI agents that write code under human oversight, with an emphasis on quality and architectural control. Karpathy described flipping from 80% manual / 20% AI to 20% manual / 80% AI delegation in a single month.4

The core skill stack includes: LLM orchestration (LangChain, LangGraph, Vercel AI SDK), retrieval-augmented generation (RAG and Graph RAG), agent frameworks and tool use, evaluation design (evals), prompt/context engineering, and model serving infrastructure. The question is whether mastering this stack is a durable career or a transient specialization.
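Of these, evaluation design is the least familiar to most application engineers. At its core, an eval is a dataset of inputs paired with graded checks on model output. A minimal harness sketch, with hypothetical names and a deliberately simple contains-grader (real frameworks add LLM judges, datasets, and reporting):

```typescript
// Minimal eval harness sketch. All names here are illustrative, not from
// any particular framework.

type EvalCase = { input: string; expected: string };
type Grader = (output: string, expected: string) => boolean;

// Simplest possible grader: pass if the expected answer appears in the output.
const containsGrader: Grader = (output, expected) =>
  output.toLowerCase().includes(expected.toLowerCase());

// Run a model function over a suite of cases and return the pass rate.
async function runEvals(
  model: (input: string) => Promise<string>,
  cases: EvalCase[],
  grade: Grader,
): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    const output = await model(c.input);
    if (grade(output, c.expected)) passed++;
  }
  return passed / cases.length; // fraction of cases that passed
}
```

Production harnesses swap the grader for an LLM judge or structured comparison, but the shape stays the same: dataset in, pass rate out, tracked across model and prompt versions.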


The case FOR: this is a real shift

The demand signal is loud

AI-related job postings increased 340% since 2024 according to LinkedIn data, while traditional SWE roles declined 15%.5 The World Economic Forum reported 1.3 million new AI-related jobs added globally in just two years.6 On Hacker News “Who’s Hiring” threads — a decent proxy for startup demand — AI is now mentioned in roughly 25% of all job posts, up from ~10% in late 2022, surpassing Python and React as the most-cited technology.7 Stanford’s AI Index found generative AI skill mentions in US job postings grew 4x year-over-year.8

This isn’t just startups chasing hype. Ninety percent of Fortune 100 companies use GitHub Copilot.9 McKinsey’s 2025 survey found 88% of organizations use AI in at least one business function.10 Gartner projects that by 2028, 90% of enterprise software engineers will use AI code assistants, up from under 14% in early 2024.11 The demand-to-supply ratio for AI engineering talent sits at approximately 3.2:1 globally.12

The salary premium is substantial and increasing at senior levels. Levels.fyi data shows AI-focused software engineers earn an 11.9% premium at the engineer level, 14.2% at senior, and 18.7% at staff — with the staff premium widening from 15.8% the prior year.13 PwC’s 2025 AI Jobs Barometer found a 56% wage premium for roles requiring AI skills versus equivalent roles without, up from 25% one year earlier.14 Anthropic lists prompt engineer roles at $250K–$375K in San Francisco.15

The models are getting genuinely good

The strongest evidence for structural shift comes from the capability trajectory. On SWE-bench Verified — a benchmark of real GitHub issues — Claude 3.5 Sonnet scored 49% in October 2024. By late 2025, six frontier models scored above 80%, within 0.8 points of each other.16 Claude Code reached an estimated $1–2 billion ARR within six months of launch.17 OpenAI’s Codex now has over one million weekly users, with usage growing 5x since January 2026.18 The AI coding tools market hit $7.37 billion in 2025, with projections of $26 billion by 2030.19

The ecosystem has matured around a recognizable stack. Harrison Chase (LangChain) defined three layers: agent frameworks (LangChain, Vercel AI SDK, CrewAI), agent runtimes (LangGraph, Temporal), and agent harnesses (Claude Agent SDK, Deep Agents SDK).20 If you’re a TypeScript/Node.js developer, the Vercel AI SDK v6 now provides first-class streaming, tool calling, and structured output generation via React Server Components — making AI integration a native part of the full-stack development loop.21

Every discipline is getting touched — some transformed

Security engineering faces the most clearly transformative shift. The OWASP Top 10 for LLM Applications 2025 defines an entirely new threat taxonomy that didn’t exist three years ago, from prompt injection to embedding poisoning to excessive agency.22 Microsoft Security Copilot’s phishing triage agent identifies 6.5x more malicious alerts than human analysts and improves verdict accuracy by 77%.23 New roles like “AI Red Team Engineer” ($160K–$230K) and “AI Security Architect” ($200K–$280K) barely existed two years ago.24 ISC2 identifies AI/ML as the #1 skill need in cybersecurity for 2026.25

DevOps/SRE is seeing measurable results from AIOps. New Relic’s 2026 AI Impact Report, based on 6.6 million users, found that AI-empowered teams deploy at up to 5x higher frequency (453 vs. 87 deployments/day) and resolve incidents 25% faster.26 PagerDuty shipped an entire AI agent suite in late 2025 including an SRE Agent for automated runbook generation.27

Platform engineering is becoming the delivery mechanism for AI across organizations. Gartner forecasts 80% platform engineering adoption by 2026.28 The Thoughtworks Technology Radar Vol 33 identifies GPU-aware orchestration as “table stakes” and MCP (Model Context Protocol) as the emerging integration standard for agentic workflows.29 The State of Platform Engineering Report Vol 4 frames the field along two axes: AI-powered platforms (augmenting IDPs with AI) and platforms for AI (GPU infrastructure, model serving).30

The 2–3 year window and the 5–10 year picture

In the tactical 2–3 year window, AI engineering skills carry asymmetric upside. The salary premium is real and growing at senior levels. The hiring demand is concrete and verifiable. Even if agentic AI enters a Gartner Trough of Disillusionment, the current wave of enterprise adoption creates 2–4 years of guaranteed demand for engineers who can build, evaluate, and secure AI-powered products. The AI Engineer World’s Fair scaled from 500 attendees in 2023 to eight events across four continents in 2026.31

The structural 5–10 year case follows the cloud pattern. Basic AI proficiency (using coding assistants, integrating LLM APIs, writing evals) will become table stakes for all software engineers — the same way basic cloud knowledge became universal by ~2015. McKinsey projects GenAI could add $2.6–$4.4 trillion annually, with software engineering among the top four value areas.32 Gartner projects that by 2030, AI-driven environments will automate 70% of routine coding tasks, fundamentally changing what “software engineer” means.33 The specialized niches that persist — inference infrastructure, AI safety/security, evaluation engineering, GPU orchestration — will command sustained premiums.


The case AGAINST: real concerns, not just FUD

The money doesn’t add up

Ed Zitron — the AI industry’s most prominent financial critic — points to a fundamental gap: global AI infrastructure spending runs approximately $400 billion annually against roughly $100 billion in enterprise AI revenue.34 OpenAI expects annual operating losses through 2028, including a projected $74 billion in operating losses in 2028 alone, while committing $1.4 trillion over eight years to data centers.35 On Claude Code specifically, Zitron calculated that subscribers consume $8–$13.50 in compute for every $1 they pay — meaning the product’s current pricing is subsidized and unsustainable at scale.36 If the infrastructure investment doesn’t yield commensurate revenue, a pullback would compress AI engineering demand.

Ray Dalio has called AI investment levels “very similar” to the dot-com era. A 2025 survey found 54% of global fund managers viewed AI stocks as bubble territory.37 Yale Insights described circular investment patterns — NVIDIA invests in OpenAI, OpenAI commits to Oracle, Oracle loses money serving OpenAI — and warned of cascading failures.38 SimpleClosure’s 2025 report documented a 2.5x year-over-year increase in Series A shutdowns, with AI wrappers “catastrophically over-represented.”39 Of 14,000+ new AI startups in 2024, approximately 40% failed within 24 months.40

The productivity gains might be a mirage

Here’s the one that really got my attention. METR ran a pre-registered randomized controlled trial with 16 experienced open-source developers across 246 real tasks — and found that AI tools (Cursor Pro with Claude 3.5/3.7) made experienced developers 19% slower, while those same developers believed they were 20% faster.41 That’s a 39-point perception gap. Over half of AI-generated code suggestions proved unusable; even accepted suggestions needed substantial manual correction.42

The NBER’s February 2026 working paper, surveying approximately 6,000 executives across the US, UK, Germany, and Australia, found that 90% of firms reported no impact on employment or productivity from AI over the previous three years.43 Apollo’s chief economist Torsten Slok called it the return of Solow’s Productivity Paradox: “AI is everywhere except in the incoming macroeconomic data.”44 PwC’s 2026 Global CEO Survey found only 12% of CEOs said AI delivered both cost and revenue benefits, while 56% saw no significant financial benefit.45

The DORA 2024 report — the gold standard for software delivery measurement, with 39,000 professionals surveyed — found that as AI adoption increased, delivery throughput decreased 1.5% and delivery stability decreased 7.2%.46 This was the second consecutive year DORA found AI worsening organizational delivery performance. The 2025 DORA report confirmed: AI users complete 21% more tasks individually and merge 98% more PRs, but organizational delivery metrics stay flat.47 As DORA team lead Nathen Harvey put it: “Generating code has rarely, if ever, been the bottleneck.”48

LLMs hit reliability walls

Gary Marcus — NYU Professor Emeritus and one of AI’s most persistent technical critics — argues that LLMs have reached a point of diminishing returns from scaling. His core claim: “There is no principled solution to hallucinations in systems that traffic only in the statistics of language without explicit representation of facts.”49 Apple’s 2025 research paper demonstrated that even “reasoning models” beyond o1 fail to generalize beyond their training distribution on classic problems like Tower of Hanoi, which Marcus called “a knockout blow for LLMs.”50 Kambhampati’s work at Arizona State showed that chains of thought produced by LLMs don’t correspond to their actual internal processing — calling into question whether “reasoning” models genuinely reason.51

The Harvard/BCG “Jagged Frontier” study — a pre-registered experiment with 758 BCG consultants — found that AI users performed 40% better on tasks inside AI’s capability frontier but were 19 percentage points worse on tasks outside it.52 The most dangerous finding: consultants couldn’t reliably distinguish which tasks fell on which side. The “fluency of the text, the confidence of the response, the apparent coherence of the reasoning” created systematic bias toward accepting plausible-looking but incorrect output.53 For production software systems, this invisible failure mode undermines the reliability case for agentic workflows.

Commoditization is already happening

Baldur Bjarnason — a 25-year web development veteran — argues that AI coding tools automate dysfunction rather than engineering excellence: “Copilot is based on the flawed myth of the 10x engineer: someone who churns out code at incredible speed, with no worries about the overall design and the end user’s needs.”54 He reports observing in production: “Inventory systems that randomly drop orders, gigantic impossible-to-review pull requests, tests that don’t test anything, dependencies that import literal malware.”55

The commoditization evidence is concrete. Prompt engineering had roughly 18 months as a legitimate technical differentiator (mid-2022 to late 2023) before models improved enough to make elaborate prompting unnecessary. An arXiv paper concluded that “prompt engineering may be a transitional practice rather than a permanent one.”56 RAG faces similar pressure: as context windows expand from 4K to 1M+ tokens, the core value proposition of external retrieval diminishes, and cloud providers (AWS, Azure, GCP) are building managed RAG services that abstract away implementation.57 Gartner places generative AI in the Trough of Disillusionment as of 2025, with AI agents at the Peak of Inflated Expectations — destined to enter the Trough within 2–3 years.58
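The retrieval core being absorbed by managed services is genuinely small. Setting aside chunking, embedding models, and index structures, top-k retrieval reduces to cosine similarity over stored vectors. A sketch with toy two-dimensional embeddings (real systems use model-generated vectors and approximate-nearest-neighbor indexes):

```typescript
// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

type Chunk = { text: string; embedding: number[] };

// The heart of RAG retrieval: rank stored chunks by similarity to the
// query embedding and return the top k. Brute force here; vector
// databases exist to make this fast at scale.
function retrieve(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

Everything a managed RAG service adds (chunking strategy, reranking, freshness, access control) sits around this loop, which is part of why the loop itself commoditizes.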

The pattern echoes what one commentator noted: “By 2025, ‘AI Engineer’ was what ‘Growth Hacker’ was in 2014. The title became meaningless. Not because the role doesn’t exist — but because the bar was on the floor.”59 If AI engineering skills rapidly diffuse into the general developer population — as Stack Overflow’s 84% adoption figure suggests is already happening — the premium for “AI Engineer” as a title may erode even as the underlying skills become universal requirements.

The trust paradox

Across every major developer survey, a paradox shows up: adoption rises while trust falls. Stack Overflow 2025 shows 84% adoption but only 33% trust, with distrust rising from 31% to 46% in one year.60 Sixty-six percent of developers cite dealing with “AI solutions almost right but not quite” as their top frustration.61 Forty-five percent say debugging AI-generated code is more time-consuming than writing it themselves.62 And 72% say “vibe coding” is not part of their professional work.63 Developers are using AI more while trusting it less — which tells me the technology is useful for narrow tasks but hasn’t earned confidence for autonomous workflows.


How this plays out by discipline

Web development — AI augments the stack, doesn’t replace the engineer

The TypeScript/React ecosystem has integrated AI more smoothly than any other discipline. Vercel’s AI SDK v6 provides first-class streaming, tool calling via Zod schemas, and React Server Component integration — meaning a Node.js/TypeScript developer can add AI capabilities to a Next.js app with minimal friction.64 The pattern: Server Actions call LLM → stream response → client renders via useChat/useObject hooks. New frontend skills are showing up too — chat interfaces with streaming responses, generative UI patterns, tool-calling UIs where AI agents execute functions within conversation flows, and semantic caching.65
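Under hooks like useChat, the client-side work is mostly accumulating text deltas from a server-sent-event stream. A sketch of that loop, using a simplified "data: ..." wire format as a stand-in for the SDK's actual stream protocol (names here are illustrative):

```typescript
// Parse text deltas out of an SSE-style stream, handling messages that
// arrive split across network chunks.
async function* sseTextDeltas(stream: AsyncIterable<string>): AsyncGenerator<string> {
  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk;
    let nl: number;
    while ((nl = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      if (line.startsWith("data: ") && line !== "data: [DONE]") {
        yield line.slice("data: ".length);
      }
    }
  }
}

// Consume the stream the way a chat UI would: append each delta to the
// message as it arrives.
async function renderMessage(stream: AsyncIterable<string>): Promise<string> {
  let message = "";
  for await (const delta of sseTextDeltas(stream)) {
    message += delta; // a React client would setState here instead
  }
  return message;
}
```

The value of the SDK's hooks is doing this (plus reconnection, tool-call frames, and error handling) so application code never touches the wire format.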

Tools like Vercel v0, Bolt.new, Lovable, and Replit Agent are commoditizing the initial prototyping phase — v0 generates production-grade React components from natural language, and Lovable reached $20M ARR in just two months.66 But 72% of professional developers say vibe coding is not part of their professional workflow.67 These tools are great for prototypes but produce code that needs significant refactoring for production. The web development community’s consensus: AI is a powerful augmentation for existing full-stack developers, not a replacement. BLS projects web and digital interface designer roles to grow 7% through 2034.68

The concerning signal for junior web developers: Stanford’s Digital Economy Study found employment for developers aged 22–25 declined nearly 20% from peak in late 2022 to July 2025.69 Salesforce paused hiring new software engineers citing AI efficiency.70 McKinsey estimates up to 80% of programming jobs will remain human-centric, but the entry point into the profession may be narrowing.71

Security engineering — the clearest transformation

Security is the discipline where AI creates the most genuine novelty. The OWASP Top 10 for LLM Applications 2025 defines an entirely new attack surface taxonomy — prompt injection, embedding poisoning, excessive agency, system prompt leakage — that didn’t exist before 2023.72 Researchers discovered 30+ vulnerabilities across 10+ AI IDEs (Copilot, Cursor, Claude Code) resulting in 24 CVEs, dubbed “IDEsaster.”73 Up to 40% of AI-generated code contains security vulnerabilities including SQL injection, XSS, and weak authentication.74

This creates structural demand in two directions. Offensively, AI red-teaming tools like Novee, NVIDIA’s Garak (100+ attack modules), and Microsoft’s PyRIT enable new forms of adversarial testing.75 Defensively, companies like Lakera, Protect AI, and Prompt Security are building LLM-specific security platforms. Microsoft deployed OWASP-aligned prompt injection detections in Defender as of May 2025.76 Shadow AI — unapproved AI tools processing sensitive data — is growing at 120% year-over-year, creating an entirely new governance challenge.77
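For illustration only, the cheapest layer of such a defense is pattern screening of user-supplied or retrieved text before it reaches the model. The patterns below are hypothetical and trivially bypassable, which is precisely why dedicated platforms and classifier-based detections exist:

```typescript
// Naive pattern-based screen for prompt-injection attempts. Real defenses
// layer classifiers, privilege separation, and output filtering on top;
// string matching alone is easy to evade.

const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now/i,
  /reveal (your )?(system )?prompt/i,
];

// Flag text that matches any known injection pattern for review or blocking.
function flagPossibleInjection(text: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(text));
}
```

A screen like this belongs at ingestion points (user input, retrieved documents, tool results), and its known weakness against paraphrased attacks is the core argument for treating prompt injection as an unsolved architectural problem rather than a filtering problem.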

Only 14% of organizations believe they have sufficient AI security talent.78 With 4 million unfilled cybersecurity roles globally, AI security is one of the clearest, most defensible career specializations in the entire AI engineering landscape. The threat surface is real, novel, and expanding — this isn’t hype-driven demand.

DevOps and SRE — real gains, troubling paradox

Every major observability vendor has shipped AI features. New Relic’s 2026 AI Impact Report found AI users achieve 2x higher alert correlation rates, 27% less alert noise, and resolve incidents 25% faster during peak events.79 PagerDuty’s AI agent suite automates runbook generation, on-call scheduling, and incident summarization.80 Datadog’s Watchdog AI cut pager floods by 60% in reported case studies.81 The tooling layer is genuinely useful.

But the DORA data injects caution. Despite universal adoption of AI tools, organizational delivery metrics — deployment frequency, lead time, change failure rate, time to restore — remain stubbornly flat or worse.82 The “vacuum hypothesis” suggests that time reclaimed from AI-assisted coding gets absorbed by lower-value tasks, context switching, and reviewing AI output rather than accelerating delivery.83 The State of Incident Management 2025 reported operational toil actually rose to 30% — the first increase in five years — despite heavy AI investment.84

AI-assisted infrastructure-as-code shows practical value: one practitioner reported an LLM-powered Terraform review system that caught 14 issues in 3 weeks that would have caused production outages, reducing PR review time from 8 hours/week to 45 minutes.85 But a Cloudmagazin analysis warns of a “comprehension gap” where AI-generated IaC is faster to write than to understand, and autonomous drift remediation can’t distinguish intentional from accidental deviations.86

Platform engineering — the delivery vehicle for AI

Platform engineering has become the organizational mechanism through which AI capabilities reach developers. Backstage commands 89% market share among internal developer platforms, serving 2 million developers across 3,400+ organizations.87 The latest release (1.43) adds experimental MCP token support — positioning the developer portal as a broker for both humans and AI assistants.88 Gartner forecasts 80% platform engineering adoption by 2026, and DORA 2025 confirms 90% of organizations now have platform engineering capabilities.89

The new skill requirements for platform engineers are significant: GPU orchestration (Run:ai, Kueue, NVIDIA GPU Operator), model serving infrastructure (KServe, Ray, Triton), vector database management, and LLM observability.90 OpenAI operates 25,000 GPUs across multiple Kubernetes clusters, with custom operators handling GPU failures every 2.5 hours and maintaining 97% utilization.91 This infrastructure layer is fundamentally different systems work from traditional web-scale backend engineering.

Platform engineers now earn up to 27% more than DevOps engineers, reflecting the expanded scope.92 The discipline has bifurcated into “AI-powered platforms” (adding AI tools to existing IDPs) and “platforms for AI” (building GPU infrastructure and ML pipeline orchestration) — both demanding new skills atop existing platform expertise.93

Systems engineering and backend infra — a genuinely new domain

Inference infrastructure has created a new category of systems engineering. Production LLM serving involves heterogeneous compute orchestration, KV cache management via mechanisms like PagedAttention, continuous batching, tensor parallelism, and multi-tenant GPU scheduling — none of which map to traditional request/response backend patterns.94 vLLM achieves 14–24x higher throughput than baseline HuggingFace Transformers; TensorRT-LLM delivers 4.6x throughput improvement on H100 FP8 versus A100.95
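Continuous batching is the easiest of these techniques to illustrate. In a greatly simplified model of what servers like vLLM schedule, a finished sequence frees its batch slot at every decode step, letting waiting requests join mid-flight instead of waiting for the whole batch to drain:

```typescript
// Toy continuous-batching simulation. Each decode step produces one token
// per active sequence; finished sequences leave immediately and waiting
// requests are admitted into the freed slots.

type Request = { id: string; tokensLeft: number };

function simulateContinuousBatching(
  queue: Request[],
  batchSize: number,
): { id: string; finishedAtStep: number }[] {
  const pending = [...queue];
  const active: Request[] = [];
  const finished: { id: string; finishedAtStep: number }[] = [];
  let step = 0;
  while (pending.length || active.length) {
    // Admit waiting requests into any free batch slots.
    while (active.length < batchSize && pending.length) {
      active.push(pending.shift()!);
    }
    step++;
    // One decode step: every active sequence emits one token.
    for (const r of active) r.tokensLeft--;
    // Finished sequences free their slot right away.
    for (let i = active.length - 1; i >= 0; i--) {
      if (active[i].tokensLeft === 0) {
        finished.push({ id: active[i].id, finishedAtStep: step });
        active.splice(i, 1);
      }
    }
  }
  return finished;
}
```

The real systems layer KV cache paging, preemption, and GPU memory accounting onto this scheduling idea, which is where the throughput numbers above come from.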

The vector database market reached $1.97 billion in 2024, projected to hit $10.6 billion by 2032 at a 23.4% CAGR.96 But there’s a critical trend — platform absorption: MongoDB acquired Voyage AI for $220 million, Databricks acquired Neon for $1 billion, and AWS Aurora, AlloyDB, and even S3 now support vector operations natively.97 As VentureBeat noted: “Vectors are no longer a specific database type but rather a specific data type.”98 Specialized vector database engineering may commoditize faster than inference infrastructure engineering.

Backend patterns for AI applications differ from traditional work primarily in: streaming as the default response pattern (SSE/WebSocket), non-deterministic outputs requiring evaluation frameworks, cost-per-token economics demanding model routing and fallback strategies, and context window management.99 If you’re a TypeScript/Node.js developer, the Vercel AI SDK provides familiar abstractions, but the underlying operational complexity around cost optimization, provider reliability, and output evaluation adds genuine new dimensions to the backend role.
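A sketch of the routing-and-fallback pattern, with placeholder provider names and prices, and the common rough heuristic of about four characters per token:

```typescript
// Cost-aware model routing with fallback. Provider names and per-token
// prices are placeholders, not real pricing.

type Model = {
  name: string;
  costPerToken: number; // USD per token, placeholder figure
  call: (prompt: string) => Promise<string>;
};

// Try models cheapest-first; on failure, fall back to the next one,
// tracking estimated cost either way.
async function routeWithFallback(
  prompt: string,
  models: Model[], // ordered cheapest-first
): Promise<{ output: string; model: string; cost: number }> {
  let lastError: unknown;
  for (const m of models) {
    try {
      const output = await m.call(prompt);
      // Rough token estimate: ~4 characters per token.
      const tokens = Math.ceil((prompt.length + output.length) / 4);
      return { output, model: m.name, cost: tokens * m.costPerToken };
    } catch (err) {
      lastError = err; // provider failed or rate-limited; try the next
    }
  }
  throw lastError;
}
```

Real routers add retries, latency budgets, and quality-based routing (send hard prompts to the expensive model up front), but the economic shape is this simple.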

Data engineering — expanding scope, not getting subsumed

Data engineering isn’t being replaced by AI engineering — it’s expanding to encompass AI-adjacent pipelines. Traditional ETL/ELT skills (Spark, dbt, Kafka) remain essential, with new responsibilities layered on top: embedding generation, vector store population, RAG pipeline quality monitoring (retrieval precision, recall, faithfulness), and unstructured data processing at scale.100 McKinsey’s 2025 survey found 71% of organizations report regular GenAI use, but only 17% attribute more than 5% of EBIT to GenAI — the “demos to production” gap remains large and data readiness is a key bottleneck.101
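Retrieval-quality monitoring is more conventional than it sounds: given labeled relevant chunks for a query, precision and recall are ordinary set arithmetic. A sketch:

```typescript
// Precision and recall for a single retrieval, given the chunk IDs the
// retriever returned and a labeled ground-truth set of relevant IDs.
function precisionRecall(
  retrieved: string[],
  relevant: Set<string>,
): { precision: number; recall: number } {
  const hits = retrieved.filter((id) => relevant.has(id)).length;
  return {
    // Of what we retrieved, how much was relevant?
    precision: retrieved.length ? hits / retrieved.length : 0,
    // Of what was relevant, how much did we retrieve?
    recall: relevant.size ? hits / relevant.size : 0,
  };
}
```

Faithfulness (whether the generated answer is actually supported by the retrieved chunks) is the harder metric and typically needs an LLM judge, but averaging precision/recall over a labeled query set is a pipeline-monitoring job data engineers already know how to run.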


The numbers

Job postings and growth rates

| Metric | Value | Source |
| --- | --- | --- |
| AI Engineer ranking on LinkedIn “Jobs on the Rise” | #1 (2025 and 2026) | LinkedIn102 |
| AI-related job posting growth since 2024 | +340% | LinkedIn/RationalFX103 |
| AI mentions in HN “Who’s Hiring” | ~25% (May 2025), up from ~10% (Oct 2022) | hntrends.com104 |
| AI/ML/Data Science job postings, 2025 | 49,200 (+163% from 2024) | Robert Half105 |
| AI Engineer role growth specifically | +143.2% | Index.dev106 |
| GenAI skill mentions in US postings, YoY | 4x growth | Stanford AI Index107 |
| Traditional SWE role posting change | -15% | LinkedIn108 |

Salary data

| Level | AI premium vs. standard SWE | Source |
| --- | --- | --- |
| Entry | +6.2% (down from 10.7%) | Levels.fyi Q3 2025109 |
| Mid | +11.9% | Levels.fyi Q3 2025110 |
| Senior | +14.2% | Levels.fyi Q3 2025111 |
| Staff | +18.7% (up from 15.8%) | Levels.fyi Q3 2025112 |
| AI skills vs. same role without | +56% | PwC 2025113 |
| AI Engineer average total comp | $245,000 | Levels.fyi114 |

The entry-level premium is shrinking while the staff-level premium is growing — consistent with the pattern DHH observed: “The most successful agent acceleration has been from the most senior people.”115

Developer survey data (AI adoption)

| Survey | Metric | Value |
| --- | --- | --- |
| Stack Overflow 2025 | Using or planning to use AI | 84% |
| Stack Overflow 2025 | Daily AI usage | 51% |
| Stack Overflow 2025 | Trust AI accuracy | 33% (only 3% “high trust”) |
| Stack Overflow 2025 | Distrust AI accuracy | 46% (up from 31%) |
| Stack Overflow 2025 | “Vibe coding” in professional work | 72% say NO |
| JetBrains 2025 | Regular AI tool usage | 85% |
| JetBrains 2025 | Save 1+ hours/week | ~90% |
| GitHub Octoverse 2025 | Repos importing LLM SDK | 1.1M+ (+178% YoY) |
| GitHub Octoverse 2025 | New devs using Copilot in first week | ~80% |

Software delivery and productivity

| Metric | Finding | Source |
| --- | --- | --- |
| DORA: AI effect on throughput | -1.5% | DORA 2024116 |
| DORA: AI effect on stability | -7.2% | DORA 2024117 |
| METR RCT: effect on experienced devs | -19% (slower) | METR 2025118 |
| METR: developer perceived effect | +20% (faster) | METR 2025119 |
| GitHub Copilot: task completion speed | +55.8% (single task) | Peng et al. 2023120 |
| Microsoft field experiment: PR throughput | +12.9–21.8% | Cui et al./MIT121 |
| Copilot: code generated for active users | 46% | GitHub122 |
| GitClear: code cloning with AI | 4x increase | GitClear 2024123 |
| NBER: firms reporting productivity impact | 10% (90% none) | NBER 2026124 |

BLS projections (2024–2034)

| Occupation | Growth | Openings/yr |
| --- | --- | --- |
| Software developers | +15% | ~129,200125 |
| Data scientists | +34% | 126 |
| Computer/info research scientists | +20% | 127 |
| Computer programmers | -6% | 128 |

BLS explicitly cites AI as both a demand driver for software developers and an automation driver for the programmer decline.129130


Historical parallels — cloud is the model, but watch for blockchain echoes

Cloud engineering (2010–2015) — the strongest parallel

Cloud followed a clear pattern: specialized hot role (2008–2012) → skills diffuse broadly (2013–2017) → new specialized niches emerge on top (SRE, platform engineering, FinOps). The timeline from AWS launch (2006) to mainstream enterprise adoption was roughly 5–7 years. Today, cloud skills are table stakes for all engineers, yet specialized cloud architecture, SRE, and platform roles command premium compensation — cloud/DevOps/platform jobs grew 35% YoY since 2023.131

AI appears to be tracking the same arc. Basic AI tool proficiency (coding assistants, LLM API integration) is already approaching ubiquity — 84% adoption in 2025. Within 3–5 years, not using AI tools will be as unusual as not using cloud services. The specialized roles that endure will be at the infrastructure layer: inference optimization, GPU orchestration, AI security, and evaluation engineering — just as SRE and platform engineering endured as cloud specializations.

Mobile development (2008–2012) — the absorption lesson

“Mobile developer” was the hottest title in tech from 2008 to 2013. Cross-platform frameworks (React Native in 2015, Flutter in 2017) then allowed web developers to build mobile apps, and the role was largely absorbed into full-stack development. Today, TypeScript/React developer listings outnumber Flutter/Dart positions 8:1 in the US.132 Native iOS/Android specialists still exist but as an increasingly niche specialization. The lesson: when abstraction layers mature, specialized roles get absorbed into adjacent generalist roles.

Blockchain (2017–2022) — the cautionary tale

Blockchain’s career trajectory was brutal. Germany saw a 94% drop in Web3 jobs from peak (22,472 to 1,256).133 Developer activity fell to 2018 levels; over half of new Web3 developers left within a year of entering the field in 2023.134 Approximately 1,000 workers left crypto for AI startups after ChatGPT’s launch.135

Three factors distinguish AI from blockchain: broader enterprise adoption (84% developer usage vs. blockchain’s limited penetration), real measurable productivity gains (even if modest), and massive physical infrastructure investment ($330B+ annually) that creates durable demand.136 But the financial structure — hundreds of billions in infrastructure investment against uncertain revenue — echoes the speculative patterns that preceded blockchain’s winter. The AI wrapper startup failure rate of 40% in under 24 months mirrors blockchain’s startup bloodbath of 2018–2019.137

Which parallel wins?

Cloud is the 70% case; blockchain is the 20% tail risk; mobile absorption is the most likely long-run shape. The technology solves real problems and has real enterprise adoption, making a full blockchain-style collapse unlikely. But the specific job title “AI Engineer” may follow mobile’s path — absorbed into general full-stack engineering within 5–7 years as AI skills become baseline expectations. The specialists who endure will be at the infrastructure and security layers, not the application integration layer.


Putting it together

Where I actually land

The shift toward AI/agentic engineering is structural but unevenly distributed, real but overhyped in magnitude, and durable in some forms but transient in others. Three things are happening at the same time:

  1. AI proficiency is becoming mandatory for all software engineers. Eighty-four percent adoption with 51% daily usage means AI tools are reaching the “default” threshold. Not learning to use them is an increasingly costly choice. Gartner projects 80% of the engineering workforce will need to upskill by 2027.138

  2. “AI Engineer” as a distinct role has real but possibly temporary value. The demand is genuine today — #1 on LinkedIn, 340% posting growth, 14–19% salary premium at senior levels. But commoditization is already visible: prompt engineering lasted ~18 months as a differentiator, RAG is being absorbed into cloud platforms, and AI wrapper startups are dying at 40% rates. The role title may peak in 2026–2028 and then dissolve into standard engineering expectations.

  3. Specific AI infrastructure and security specializations will endure. Inference optimization, GPU orchestration, AI red-teaming, evaluation engineering, and model serving involve deep systems knowledge that resists commoditization. These are the cloud-era equivalents of SRE and platform engineering — specialized roles that emerged after the initial hype cycle and persisted.

The adoption-trust paradox is the key signal

The single most important pattern in the data is that adoption and trust are moving in opposite directions. Developers are using AI tools more while trusting them less. Organizational delivery metrics worsen even as individual productivity rises. Ninety percent of firms report no macro productivity impact. This paradox resolves one of two ways: either the tools improve enough to earn trust (bullish for AI engineering as a long-term discipline), or the adoption curve flattens and partially reverses (bearish for the role’s longevity). I’d give it 2–4 years to shake out.


What to invest in if the thesis holds

If you buy the bull case — or even the moderate “cloud parallel” thesis — these are the highest-return skill investments, ordered by estimated durability:

  1. Evaluation engineering: designing and maintaining evals for non-deterministic systems, the skill this analysis repeatedly flags as most resistant to commoditization.

  2. AI security: the OWASP LLM threat surface, red-teaming, and shadow AI governance, where the supply gap (only 14% of organizations believe they have sufficient talent) is widest.

  3. Inference and GPU infrastructure: model serving, GPU orchestration, and cost optimization, deep systems work that doesn’t map to traditional backend patterns.

  4. Agent orchestration and context engineering: high leverage today, but watch for the prompt-engineering pattern of rapid commoditization.


What stays valuable no matter what

Whether AI engineering turns out to be the next cloud or the next blockchain, these skills hold their value:

  1. Systems thinking: understanding how components fail and interact, which AI-generated code makes more important, not less.

  2. Security fundamentals: the threat surface expands regardless of how the hype cycle resolves.

  3. Evaluation discipline: knowing how to measure whether software actually works, AI-generated or not.

  4. The judgment to recognize good software: the METR, DORA, and Harvard/BCG findings all point to human oversight as the binding constraint.


Where the evidence is shaky

Several key claims in this space rest on contested or methodologically limited evidence. Worth flagging:

  1. The MIT “95% of AI pilots fail” finding is based on 52 executive interviews with self-reported, directionally accurate estimates rather than verified financial data; Marketing AI Institute’s Paul Roetzer called the methodology “flawed.”144

  2. The PwC claim of a 56% wage premium represents a dramatic jump from 25% in a single year and should be treated cautiously pending independent verification.145

  3. GitHub’s claim that Copilot writes 46% of all code for active users comes from the vendor itself and hasn’t been independently audited; GitClear’s analysis of 153 million lines suggests the quality of that code is questionable.146

  4. Gartner’s prediction of 90% AI code assistant adoption by 2028 was revised upward from 75% within one year, which may indicate forecast instability rather than genuine acceleration.147

  5. The Solow Productivity Paradox eventually resolved: IT productivity gains materialized in the late 1990s, roughly 15–20 years after initial adoption, meaning AI’s macro productivity impact could simply be delayed rather than absent.148


My take

The evidence doesn’t support either extreme. AI engineering is neither “just hype” nor an overnight revolution that obsoletes traditional software engineering. The pattern most consistent with the data is a cloud-scale structural transition that plays out over 5–7 years, with today’s specialized “AI Engineer” role gradually dissolving into standard engineering practice while deeper infrastructure and security specializations crystallize as durable career paths.

If you’re a senior TypeScript/Node.js engineer on an identity platform, the play isn’t a wholesale career pivot — it’s a deliberate skill expansion. Learn the Vercel AI SDK and agent orchestration patterns that connect directly to your existing stack. Build evaluation and AI security expertise that compounds with your identity/auth domain knowledge. Invest in the systems-level understanding (inference infrastructure, context engineering) that resists commoditization. The tactical window for maximum career leverage is the next 2–3 years, while demand outstrips supply and the premium is widest. The structural hedge is to anchor in skills — systems thinking, security, evaluation discipline — that stay valuable whether the AI hype cycle peaks or plateaus.
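What “evaluation discipline” means in practice can be sketched in a few lines. The harness below is a minimal, hypothetical example — not any particular framework’s API. `runModel` is a stub standing in for a real model call (in production you’d swap in an SDK call); the point is that the scoring loop is plain, testable code you own.

```typescript
// Minimal eval harness sketch: run a fixed set of cases against a model
// and report a pass rate. All names here are illustrative, not a real API.

type EvalCase = { prompt: string; mustContain: string[] };

// Hypothetical stub — a real implementation would call an LLM provider here.
async function runModel(prompt: string): Promise<string> {
  return `Echo: ${prompt}`;
}

// A case passes only if the output contains every required substring.
async function scoreCase(c: EvalCase): Promise<boolean> {
  const output = await runModel(c.prompt);
  return c.mustContain.every((s) => output.includes(s));
}

// Aggregate: fraction of cases passed, in [0, 1].
async function runEvals(cases: EvalCase[]): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    if (await scoreCase(c)) passed++;
  }
  return passed / cases.length;
}

// Example: the stub satisfies the first case but not the second.
const cases: EvalCase[] = [
  { prompt: "reset password", mustContain: ["reset password"] },
  { prompt: "delete account", mustContain: ["confirmation step"] },
];

runEvals(cases).then((rate) => console.log(`pass rate: ${rate}`)); // pass rate: 0.5
```

The design choice that matters is that the eval set is versioned alongside the code it guards, so a model or prompt change that regresses behavior fails visibly, the same way a unit test would.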

The single most important insight from all of this: the developers who benefit most from AI are the ones who already understand what good software looks like. The METR study showed AI made experienced developers slower. The Harvard/BCG study showed AI made experts worse on tasks outside the frontier. The DORA data shows AI hurts organizational delivery. In every case, the failure mode is the same — humans deferring to AI on tasks that require human judgment. The engineers who thrive in the agentic era won’t be the ones who delegate most aggressively to AI. They’ll be the ones who know precisely when to trust it and when to override it. That judgment is the real skill to invest in.



  1. Swyx (Shawn Wang), “The Rise of the AI Engineer,” Latent Space, June 2023 — https://www.latent.space/p/ai-engineer

  2. Swyx (Shawn Wang), “The Rise of the AI Engineer,” Latent Space, June 2023 — https://www.latent.space/p/ai-engineer

  3. Andrej Karpathy on “Agentic Engineering,” The New Stack, February 2026 — https://thenewstack.io/vibe-coding-is-passe/

  4. Andrej Karpathy on “Agentic Engineering,” The New Stack, February 2026 — https://thenewstack.io/vibe-coding-is-passe/

  5. LinkedIn/RationalFX AI job posting data, early 2026 — https://tech-insider.org/tech-layoffs-2026-ai-workforce-impact/

  6. World Economic Forum / LinkedIn, “AI has already added 1.3 million new jobs,” January 2026 — https://www.weforum.org/stories/2026/01/ai-has-already-added-1-3-million-new-jobs-according-to-linkedin-data/

  7. Hacker News “Who is Hiring” AI mention trends — https://www.hntrends.com/

  8. Stanford HAI AI Index Report 2025 — https://spectrum.ieee.org/ai-jobs-in-2025

  9. GitHub Copilot statistics, aggregated data — https://www.quantumrun.com/consulting/github-copilot-statistics/

  10. McKinsey Global Survey on AI, 2025 — https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  11. Gartner Software Engineering Trends, July 2025 — https://www.gartner.com/en/newsroom/press-releases/2025-07-01-gartner-identifies-the-top-strategic-trends-in-software-engineering-for-2025-and-beyond

  12. AI Engineer compensation and demand data, multiple sources — https://www.levels.fyi/blog/ai-engineer-compensation.html

  13. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  14. PwC AI Jobs Barometer, 2025 — referenced via multiple aggregators

  15. AI Engineer compensation and demand data, multiple sources — https://www.levels.fyi/blog/ai-engineer-compensation.html

  16. Anthropic SWE-bench results — https://www.anthropic.com/research/swe-bench-sonnet

  17. Anthropic Claude Code revenue analysis — https://www.uncoveralpha.com/p/anthropics-claude-code-is-having

  18. OpenAI Codex launch and adoption data — https://openai.com/index/introducing-codex/

  19. GitHub Copilot statistics, aggregated data — https://www.quantumrun.com/consulting/github-copilot-statistics/

  20. Harrison Chase (LangChain), “Agent Frameworks, Runtimes, and Harnesses” — https://blog.langchain.com/agent-frameworks-runtimes-and-harnesses-oh-my/

  21. Vercel AI SDK documentation — https://ai-sdk.dev/docs/introduction

  22. OWASP Top 10 for LLM Applications 2025 — https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/

  23. Microsoft Security Copilot SOC capabilities — https://techcommunity.microsoft.com/blog/microsoftthreatprotectionblog/security-copilot-in-defender-empowering-the-soc-with-assistive-and-autonomous-ai/4503047

  24. Emerging AI Security Roles — https://www.practical-devsecops.com/emerging-ai-security-roles/

  25. Emerging AI Security Roles — https://www.practical-devsecops.com/emerging-ai-security-roles/

  26. New Relic 2026 AI Impact Report — https://newrelic.com/press-release/20260126

  27. PagerDuty AI Agent Suite, H2 2025 — https://www.pagerduty.com/blog/product/product-launch-2025-h2/

  28. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  29. Thoughtworks Technology Radar Vol 33, November 2025 — https://www.thoughtworks.com/radar

  30. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  31. Swyx (Shawn Wang), “The Rise of the AI Engineer,” Latent Space, June 2023 — https://www.latent.space/p/ai-engineer

  32. McKinsey Global Survey on AI, 2025 — https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  33. Gartner Software Engineering Trends, July 2025 — https://www.gartner.com/en/newsroom/press-releases/2025-07-01-gartner-identifies-the-top-strategic-trends-in-software-engineering-for-2025-and-beyond

  34. Ed Zitron on AI economics — https://slate.com/technology/2025/02/ed-zitron-interview-big-tech-ai-criticism.html

  35. Ed Zitron on AI economics — https://slate.com/technology/2025/02/ed-zitron-interview-big-tech-ai-criticism.html

  36. Ed Zitron on AI economics — https://slate.com/technology/2025/02/ed-zitron-interview-big-tech-ai-criticism.html

  37. AI bubble analysis and financial evidence — https://en.wikipedia.org/wiki/AI_bubble and https://insights.som.yale.edu/insights/this-is-how-the-ai-bubble-bursts

  38. AI bubble analysis and financial evidence — https://en.wikipedia.org/wiki/AI_bubble and https://insights.som.yale.edu/insights/this-is-how-the-ai-bubble-bursts

  39. AI bubble analysis and financial evidence — https://en.wikipedia.org/wiki/AI_bubble and https://insights.som.yale.edu/insights/this-is-how-the-ai-bubble-bursts

  40. AI startup failure data — https://medium.com/@aiempiremedia/the-real-reason-ai-startups-are-failing-in-2026-30a4cc9fd140

  41. METR AI Productivity Study, 2025 — https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  42. METR AI Productivity Study, 2025 — https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  43. NBER Working Paper 34836, “AI and the Productivity Paradox,” February 2026 — https://www.nber.org/papers/w34836

  44. NBER Working Paper 34836, “AI and the Productivity Paradox,” February 2026 — https://www.nber.org/papers/w34836

  45. NBER Working Paper 34836, “AI and the Productivity Paradox,” February 2026 — https://www.nber.org/papers/w34836

  46. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  47. Faros AI analysis of DORA 2025 findings — https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025

  48. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  49. Gary Marcus on LLM scaling limits — https://garymarcus.substack.com/p/confirmed-llms-have-indeed-reached

  50. Gary Marcus on Apple reasoning research — https://garymarcus.substack.com/p/a-knockout-blow-for-llms

  51. Gary Marcus on LLM scaling limits — https://garymarcus.substack.com/p/confirmed-llms-have-indeed-reached

  52. Harvard/BCG “Jagged Frontier” study (Dell’Acqua et al., 2023) — https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

  53. Harvard/BCG “Jagged Frontier” study (Dell’Acqua et al., 2023) — https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

  54. Baldur Bjarnason, “The Intelligence Illusion” — https://illusion.baldurbjarnason.com/

  55. Baldur Bjarnason, “The Intelligence Illusion” — https://illusion.baldurbjarnason.com/

  56. Prompt engineering commoditization research — https://arxiv.org/html/2510.22251v1

  57. Prompt engineering commoditization research — https://arxiv.org/html/2510.22251v1

  58. Gartner Hype Cycle for AI 2025 — https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence

  59. AI Engineer compensation and demand data, multiple sources — https://www.levels.fyi/blog/ai-engineer-compensation.html

  60. Stack Overflow Developer Survey 2025 — https://survey.stackoverflow.co/2025/

  61. Stack Overflow Developer Survey 2025 — https://survey.stackoverflow.co/2025/

  62. Stack Overflow Developer Survey 2025 — https://survey.stackoverflow.co/2025/

  63. Stack Overflow Developer Survey 2025 — https://survey.stackoverflow.co/2025/

  64. Vercel AI SDK documentation — https://ai-sdk.dev/docs/introduction

  65. Vercel AI SDK documentation — https://ai-sdk.dev/docs/introduction

  66. AI web development tool comparison — https://technically.dev/posts/vibe-coding-tool-comparison

  67. Stack Overflow Developer Survey 2025 — https://survey.stackoverflow.co/2025/

  68. BLS Web and Digital Interface Designers outlook — https://www.bls.gov/ooh/computer-and-information-technology/

  69. Stack Overflow 2025, “AI vs. Gen Z” analysis — https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/

  70. Stack Overflow 2025, “AI vs. Gen Z” analysis — https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/

  71. Stack Overflow 2025, “AI vs. Gen Z” analysis — https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/

  72. OWASP Top 10 for LLM Applications 2025 — https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/

  73. AI IDE security research (“IDEsaster”) — https://www.hackingdream.net/2026/03/ai-penetration-testing-complete-guide-to-ai-red-teaming.html

  74. DryRun Security AI SAST analysis — https://www.dryrun.security/blog/top-ai-sast-tools-2026

  75. AI IDE security research (“IDEsaster”) — https://www.hackingdream.net/2026/03/ai-penetration-testing-complete-guide-to-ai-red-teaming.html

  76. OWASP Top 10 for LLM Applications 2025 — https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/

  77. AI IDE security research (“IDEsaster”) — https://www.hackingdream.net/2026/03/ai-penetration-testing-complete-guide-to-ai-red-teaming.html

  78. Emerging AI Security Roles — https://www.practical-devsecops.com/emerging-ai-security-roles/

  79. New Relic 2026 AI Impact Report — https://newrelic.com/press-release/20260126

  80. PagerDuty AI Agent Suite, H2 2025 — https://www.pagerduty.com/blog/product/product-launch-2025-h2/

  81. New Relic 2026 AI Impact Report — https://newrelic.com/press-release/20260126

  82. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  83. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  84. State of Incident Management 2025 — https://runframe.io/blog/state-of-incident-management-2025

  85. AI-augmented Terraform review analysis — https://insights.strivenimbus.com/articles/ai-augmented-devops-llm-terraform-review/

  86. AI-augmented Terraform review analysis — https://insights.strivenimbus.com/articles/ai-augmented-devops-llm-terraform-review/

  87. Backstage 1.43 and IDP market share — https://platformengineering.com/features/backstage-1-43-when-internal-developer-platforms-start-acting-like-platforms/

  88. Backstage 1.43 and IDP market share — https://platformengineering.com/features/backstage-1-43-when-internal-developer-platforms-start-acting-like-platforms/

  89. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  90. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  91. GPU orchestration production data — https://introl.com/blog/self-service-gpu-platforms-internal-ml-cloud-guide-2025

  92. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  93. State of Platform Engineering Report Vol 4 — https://platformengineering.org/blog/announcing-the-state-of-platform-engineering-vol-4

  94. LLM inference engine benchmarks — https://www.marktechpost.com/2025/11/07/comparing-the-top-6-inference-runtimes-for-llm-serving-in-2025/

  95. LLM inference engine benchmarks — https://www.marktechpost.com/2025/11/07/comparing-the-top-6-inference-runtimes-for-llm-serving-in-2025/

  96. Vector database market sizing — https://www.snsinsider.com/reports/vector-database-market-5881

  97. VentureBeat, “Six Data Shifts for Enterprise AI in 2026” — https://venturebeat.com/data/six-data-shifts-that-will-shape-enterprise-ai-in-2026

  98. VentureBeat, “Six Data Shifts for Enterprise AI in 2026” — https://venturebeat.com/data/six-data-shifts-that-will-shape-enterprise-ai-in-2026

  99. LLM inference engine benchmarks — https://www.marktechpost.com/2025/11/07/comparing-the-top-6-inference-runtimes-for-llm-serving-in-2025/

  100. RAG and enterprise AI data readiness — https://lakefs.io/blog/the-state-of-data-ai-engineering-2025/

  101. RAG and enterprise AI data readiness — https://lakefs.io/blog/the-state-of-data-ai-engineering-2025/

  102. World Economic Forum / LinkedIn, “AI has already added 1.3 million new jobs,” January 2026 — https://www.weforum.org/stories/2026/01/ai-has-already-added-1-3-million-new-jobs-according-to-linkedin-data/

  103. LinkedIn/RationalFX AI job posting data, early 2026 — https://tech-insider.org/tech-layoffs-2026-ai-workforce-impact/

  104. Hacker News “Who is Hiring” AI mention trends — https://www.hntrends.com/

  105. AI Engineer compensation and demand data, multiple sources — https://www.levels.fyi/blog/ai-engineer-compensation.html

  106. AI Engineer compensation and demand data, multiple sources — https://www.levels.fyi/blog/ai-engineer-compensation.html

  107. Stanford HAI AI Index Report 2025 — https://spectrum.ieee.org/ai-jobs-in-2025

  108. LinkedIn/RationalFX AI job posting data, early 2026 — https://tech-insider.org/tech-layoffs-2026-ai-workforce-impact/

  109. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  110. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  111. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  112. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  113. PwC AI Jobs Barometer, 2025 — referenced via multiple aggregators

  114. Levels.fyi AI Engineer Compensation Trends Q3 2025 — https://www.levels.fyi/blog/ai-engineer-compensation-trends-q3-2025.html

  115. DHH on agentic coding, via Pragmatic Engineer — https://newsletter.pragmaticengineer.com/p/dhhs-new-way-of-writing-code

  116. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  117. DORA 2024 Report, Google Cloud — https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report

  118. METR AI Productivity Study, 2025 — https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  119. METR AI Productivity Study, 2025 — https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

  120. Peng et al., “The Impact of AI on Developer Productivity” (2023) — https://arxiv.org/abs/2302.06590

  121. Cui et al., Microsoft/Accenture field experiment — https://mit-genai.pubpub.org/pub/v5iixksv

  122. GitHub Copilot statistics, aggregated data — https://www.quantumrun.com/consulting/github-copilot-statistics/

  123. GitClear AI code quality research — https://www.gitclear.com/ai_assistant_code_quality_2025_research

  124. NBER Working Paper 34836, “AI and the Productivity Paradox,” February 2026 — https://www.nber.org/papers/w34836

  125. BLS Occupational Outlook, Software Developers — https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

  126. BLS Occupational Outlook, Software Developers — https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

  127. BLS Occupational Outlook, Software Developers — https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

  128. BLS Occupational Outlook, Computer Programmers — https://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm

  129. BLS Occupational Outlook, Software Developers — https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

  130. BLS Occupational Outlook, Computer Programmers — https://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm

  131. Cloud career and job growth data — https://codetocloud.io/blog/cloud-career-playbook

  132. Flutter vs. React Native job market data — https://tech-insider.org/flutter-vs-react-native-2026/

  133. Web3 jobs report and blockchain developer exodus — https://coincub.com/ranking/web3-jobs-report-2025/

  134. Web3 jobs report and blockchain developer exodus — https://coincub.com/ranking/web3-jobs-report-2025/

  135. Web3 jobs report and blockchain developer exodus — https://coincub.com/ranking/web3-jobs-report-2025/

  136. AI infrastructure investment data — referenced via multiple financial analyses

  137. AI startup failure data — https://medium.com/@aiempiremedia/the-real-reason-ai-startups-are-failing-in-2026-30a4cc9fd140

  138. Gartner Software Engineering Trends, July 2025 — https://www.gartner.com/en/newsroom/press-releases/2025-07-01-gartner-identifies-the-top-strategic-trends-in-software-engineering-for-2025-and-beyond

  139. Thoughtworks Technology Radar Vol 33, November 2025 — https://www.thoughtworks.com/radar

  140. DryRun Security AI SAST analysis — https://www.dryrun.security/blog/top-ai-sast-tools-2026

  141. GitHub Octoverse 2025 — https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/

  142. Andrej Karpathy on “Agentic Engineering,” The New Stack, February 2026 — https://thenewstack.io/vibe-coding-is-passe/

  143. Harvard/BCG “Jagged Frontier” study (Dell’Acqua et al., 2023) — https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321

  144. MIT “GenAI Divide” study, Fortune coverage — https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

  145. PwC AI Jobs Barometer, 2025 — referenced via multiple aggregators

  146. GitClear AI code quality research — https://www.gitclear.com/ai_assistant_code_quality_2025_research

  147. Gartner Software Engineering Trends, July 2025 — https://www.gartner.com/en/newsroom/press-releases/2025-07-01-gartner-identifies-the-top-strategic-trends-in-software-engineering-for-2025-and-beyond

  148. NBER Working Paper 34836, “AI and the Productivity Paradox,” February 2026 — https://www.nber.org/papers/w34836