AXEL NEWE

From the Field: Thoughts on Growth, Tech, Democracy & Life

Fake It ’Til You Build It? The Rise of the AI Imposter

5/5/2025

Artificial Intelligence (AI) is no longer hype—it’s business-critical. From predictive analytics to generative copilots, the technology is being rapidly adopted across industries. But as AI enters the mainstream, so do those eager to cash in on the buzz. Lately, it seems like everyone is an “AI specialist”—from seasoned engineers to those who barely scratched the surface of a Coursera course. And so I had to ask: Is this real?

After some heavy reading, coursework, and hands-on exploration—including building AI agents and studying governance frameworks—the answer is clear: yes, the flood of self-appointed AI experts is a real phenomenon, and it poses real risks. In my research for this blog entry, I came across consistent warnings from thought leaders and analyst firms about a growing divide between those who can speak fluently about AI and those who can actually implement it. Gartner has described this as "AI washing": the rebranding of traditional services with AI terminology without delivering substantive capabilities. McKinsey’s latest AI report noted that while adoption is up, many companies struggle to scale beyond pilots. Forbes and TechTarget have also highlighted how many consultants focus more on storytelling than on real delivery. Even among some bona fide integrators, a pattern is emerging: some talk fluently about AI value and go-to-market motion, yet have no track record of actual implementations, products, or agent-based design.
What’s Driving the Rise of AI Charlatans?
The explosion of generative AI tools like ChatGPT (OpenAI), GitHub Copilot (GitHub & OpenAI), and Claude (Anthropic) has lowered the barrier to entry for AI conversations—but not necessarily for implementation. This has created an ecosystem where “AI strategists” can thrive on surface-level knowledge while appearing credible to non-technical stakeholders.
Three Types of AI Actors
In my reading, I found three broad types of AI professionals:
The Charlatan
Talks fluently about AGI, LLMs, and “transforming the enterprise,” but can’t explain what an embedding is or how to vet a dataset. Often lacks hands-on experience, and overuses hype words with little substance.
The Business-Aligned Generalist
Understands their domain (e.g., marketing, supply chain, compliance) and how AI can improve it, but doesn’t build models or own architectures. Perfectly credible—so long as they stay in their lane.
The Practitioner
This is the engineer, data scientist, or technical leader who has designed, deployed, and evaluated AI systems. They understand model limitations, governance risks, and system integration challenges.
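To make one of those litmus tests concrete: an embedding is just a list of numbers representing a piece of text, where similar meanings produce nearby vectors. The sketch below uses hand-made four-dimensional toy vectors standing in for real model output (real embeddings have hundreds or thousands of dimensions), and compares them with cosine similarity:

```python
# Minimal sketch of what an "embedding" is: a vector of numbers
# representing text, compared by cosine similarity (1.0 = same direction).
# The vectors here are invented for illustration; a real system would get
# them from a trained model, not write them by hand.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" for three words.
invoice = [0.9, 0.1, 0.0, 0.2]
bill    = [0.85, 0.15, 0.05, 0.25]
bicycle = [0.1, 0.9, 0.8, 0.0]

print(cosine_similarity(invoice, bill))     # high: related meanings
print(cosine_similarity(invoice, bicycle))  # low: unrelated meanings
```

A practitioner can explain this in a sentence or two; someone who can't is probably in the first category above.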
What Businesses Should Watch Out For
  • No Implementation Trail: True AI experts (and the organizations they work for) can point to past deployments—prototypes, accelerators, even shipped products—and speak fluently about tradeoffs.
  • Governance Blind Spots: If your AI “consultant” isn’t discussing model risk, compliance, or ethical use, that’s a red flag. As both Gartner and McKinsey note, responsible AI requires governance frameworks, model monitoring, and risk mitigation plans.
  • Buzzword Bingo: Real practitioners talk about model evaluation, API latency, and fine-tuning parameters—not just “hyper-automation” or “GPT-fueled revolution.”
  • No Business Context: AI projects must be mapped to measurable business outcomes. If someone can’t tie model performance to process improvement, customer impact, or ROI, they’re selling magic, not solutions.
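The difference between buzzwords and business context can be shown in a few lines: evaluate a model with concrete metrics, then translate its errors into a cost the business understands. All labels and dollar figures below are made-up assumptions for illustration, not a real deployment:

```python
# Hedged sketch: compute precision/recall for a toy fraud classifier,
# then convert its mistakes into an estimated dollar cost.
# Labels and costs are invented for illustration.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labels: 1 = fraud, 0 = legitimate.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

p, r = precision_recall(y_true, y_pred)

# Hypothetical business costs: a missed fraud (false negative) costs $500;
# a false alarm (false positive) costs $20 of analyst review time.
fn_count = sum(1 for t, pr in zip(y_true, y_pred) if t == 1 and pr == 0)
fp_count = sum(1 for t, pr in zip(y_true, y_pred) if t == 0 and pr == 1)
est_cost = fn_count * 500 + fp_count * 20

print(f"precision={p:.2f} recall={r:.2f} est. error cost=${est_cost}")
```

Someone selling solutions rather than magic should be able to walk through an exercise like this for your domain, with your numbers.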
How to Separate the Real from the Pretend
  • Ask About Projects: What did they build? For whom? What were the tradeoffs?
  • Check for Cross-Functional Fluency: Can they talk with both engineers and executives?
  • Demand Accountability: Do they push for governance boards, pilot phases, and clear success metrics?
Final Thoughts
AI is powerful—but only when implemented responsibly. And while I’m no AI guru, I’ve been around long enough to see a few tech trends come and go. This one felt different. The noise, the hype, the rush to claim expertise—it compelled me to dig in, do some research, and understand what’s real and what’s not. What I found was clear: businesses need to be just as discerning about who they trust with AI as they are about the technology itself. So yes, beware the AI imposter. Ask hard questions. And if someone can’t explain how a model helps your business, it’s probably time to move on.
Sources
These readings informed my understanding and this post:
  • Gartner. "AI Washing: How to Spot and Avoid It." 2023. https://www.gartner.com/en/articles/ai-washing-how-to-spot-and-avoid-it
  • McKinsey & Company. "The State of AI in 2023." https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023
  • Forbes. "AI Consulting Is Booming — But Expertise Isn’t Always Part of the Deal." 2023.
  • TechTarget. "How to Avoid AI-Washing and Choose the Right AI Partner." 2023.

    Author

    Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.

