AXEL NEWE
  • Home
  • About Me
  • Work History
  • My Portfolio
    • Civic Engagement
    • Professional Thought Leadership
    • Trainings, Learnings, and Certifications
  • My Blog
  • Photo Album
  • Links and Affiliations
  • Contact

From the Field: Thoughts on Growth, Tech, Democracy & Life

The Brussels Effect 2.0: Why the EU AI Act Matters Far Beyond Europe

6/28/2025

0 Comments

 
The EU AI Act is officially law, and its impact won’t be confined to European borders. If history is any guide, we’re watching the early stages of what some are calling “The Brussels Effect 2.0.” Just as GDPR reshaped global data privacy standards, the AI Act is poised to redefine how companies build, govern, and scale artificial intelligence.

But this isn’t just about compliance. It’s about strategy.

Companies that treat the AI Act as a bureaucratic nuisance will play catch-up. The smart ones—those that start aligning their models, governance, and transparency practices now—stand to gain a lasting edge. Why? Because EU standards have a way of becoming global defaults, whether your business is based in Brussels, Boston, or Bangalore.

In this piece, I unpack:

  • What the AI Act actually requires (in plain language)
  • How its phased rollout will affect global AI builders
  • Why designing for regulation can become a competitive moat
  • And how U.S. companies can lead by preparing, not reacting

📖 Read the full long-form essay here on Substack:
👉 https://open.substack.com/pub/axelnewe/p/the-brussels-effect-20-building-ai


Agentic AI in Healthcare—What’s Real, What’s Hype, and Why It Matters

6/3/2025

2 Comments

 
Agentic AI—systems that act on data rather than just analyze it—is being hailed as a cure-all for the healthcare industry’s inefficiencies. Payers, providers, and pharma firms are investing fast. But how much of what’s being promised is actually feasible today, and how much is branding-driven hype?

In a new white paper, I explore the advertised, actual, and emerging uses of agentic AI in healthcare. From Salesforce’s acquisition of Informatica to UK-based “AI” firms exposed for running smoke-and-mirrors operations, it’s clear that the field needs clarity—and accountability.

This blog provides a preview of what you’ll find in that deeper dive.

What’s Being Promised:

  • Automated prior authorizations and appeals in payer operations
  • Clinical documentation and decision support
  • Ambient listening for providers
  • AI agents in trial site selection and adverse event detection for pharma

What’s Working Now:

  • Clinical ambient tools (Nuance DAX, Abridge, Suki) are helpful but still human-in-the-loop
  • Automation of narrow workflows like intake triage or formulary lookups
  • Some trial modeling, where data quality is strong and regulatory paths are clear
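
To make “human-in-the-loop” concrete, here is a minimal, hypothetical sketch in Python (toy keyword logic standing in for a model call, not any vendor’s actual API) of how a narrow workflow like intake triage keeps a person in the approval path:

```python
# Hypothetical sketch: an "agent" drafts a triage suggestion,
# but a human must approve it before anything is committed.

def suggest_triage(intake_note: str) -> str:
    """Stand-in for a model call: route based on simple keywords."""
    urgent_terms = ("chest pain", "shortness of breath", "bleeding")
    if any(term in intake_note.lower() for term in urgent_terms):
        return "urgent"
    return "routine"

def triage_with_human_in_the_loop(intake_note: str, human_approves) -> str:
    suggestion = suggest_triage(intake_note)
    # The agent never acts alone: a clinician confirms or overrides.
    if human_approves(suggestion):
        return suggestion
    return "escalate-for-review"

# Example: the clinician only signs off on labels they agree with.
result = triage_with_human_in_the_loop(
    "Patient reports mild chest pain after exercise",
    human_approves=lambda s: s == "urgent",
)
print(result)  # urgent
```

The point of the design: the agent only drafts; a clinician’s confirmation (or override) gates every action that would actually touch the record.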

The Gap:

  • APIs struggle with fragmented or unclean data
  • Regulatory bodies still require auditability and explainability
  • Firms overpromising agentic autonomy rarely disclose how brittle their models are

Consulting Firms: The Connective Tissue
It’s not just product companies shaping this space. Many consulting firms—Cognizant, Deloitte, EPAM, Accenture, Slalom, and others—play a unique hybrid role. They may:

  • Build their own agentic tools (sometimes as ISVs)
  • Partner with platform vendors like Salesforce or AWS
  • Drive real-world implementation and integration of AI in healthcare settings

Far from adding confusion, these firms often bring much-needed structure, compliance rigor, and domain context. They’re helping AI move from lab demo to daily workflow.

Case in Point:

British “AI” firm Repliq was exposed by the Financial Times for passing off manual processes as generative AI, with junior developers writing responses behind the scenes. It was a textbook case of vaporware wrapped in buzzwords.

Read the White Paper:
The companion white paper explores:

  • Why Informatica fills a critical data hygiene gap in Salesforce’s Data Cloud
  • How to spot poser AI companies (regardless of whether they’re vendors or services firms)
  • The regulatory drag on agentic deployment in clinical vs. administrative workflows
  • Who’s actually building credible healthcare AI—and how to tell

Conclusion:
AI won’t save healthcare overnight. But real, responsible agentic AI—built on clean data, governed properly, and validated openly—can still move the needle. We just have to know where to look.

Read more: 
Get the Full White Paper - Agentic AI in Healthcare: Sorting Real Innovation from Vaporware

The Restoration Trap Part II: Three Days in July – What the French Revolution of 1830 Teaches Us About Resistance

5/27/2025

0 Comments

 
In Part I, I drew the historical line between Charles X of France and Donald Trump. Both rose to power on promises of restoration. Both alienated legislatures. Both flirted with silencing dissent. Charles went too far. Trump might, too.

So what happened when Charles X crossed the line? The answer lies in the events that began on 26 July 1830.

Charles issued a set of repressive orders known as the July Ordinances, which:

  • Suspended freedom of the press
  • Dissolved the progressive legislature
  • Changed election laws to favor loyalists (Alpha History)

By the morning of July 27, Parisians had revolted. Workers, students, and even some middle-class citizens took to the streets. What followed wasn’t a chaotic civil war but a highly focused push to defend civil rights and constitutional government.

Despite personal risk, the press took the lead in keeping the citizens of France informed and helped spark the revolution. Tradesmen, workers, and merchants followed suit. Charles abdicated, fled to Britain, and the monarchy was replaced (briefly) by a constitutional regime.

What can that teach us?

Resisting Autocracy Doesn't Require Violence
The July Revolution worked not because it burned everything down, but because it focused on defending institutions, not destroying them. The press played a critical role. So did moderate politicians who refused to accept illegal decrees.

Today, we’re not facing royal ordinances, but we are looking at:

  • Plans to dismantle civil service protections (via Project 2025)
  • Open threats to prosecute political opponents
  • Legal theories that place the president above the law (Heritage Foundation, 2024)

The Power of Civil Society
In 1830 France, it was the teachers, printers, and municipal workers—not just elites—who resisted. They refused to implement illegal orders, slowed down compliance, and gave people space to act.
Here in the U.S., we’ll need:

  • Lawyers and judges who uphold the law, even under pressure
  • Journalists who don’t flinch when the subpoenas arrive
  • Public servants who know that democracy is in their job description

Final Thought: The Resistance Is Already Here
If President Trump continues to try to govern like Charles X, the institutions that survive will be the ones willing to say "no"—even when it’s hard. The American republic won’t be saved by spectacle. It will be saved by professionals, institutional guardians, people who know their history, and, hopefully, the rest of us.

The July Revolution was three days. But its effects rippled across Europe.

Let’s learn something from it.

Sources & Citations:
  • Alpha History – July Ordinances and Revolution
  • Encyclopedia of Revolutions of 1848 – France
  • Heritage Foundation – Unitary Executive Theory
  • Project 2025 Blueprint
  • Washington Post – Trump's Second-Term Blueprint

The AI Takeover That Isn’t (Yet?)

5/16/2025

0 Comments

 
Many of us have seen this headline, or one like it: “AI is coming for your job.” “White-collar work will never be the same.” Perhaps you read a recent article in The Wall Street Journal suggesting that executives are starting to treat their staff differently because of AI: less as individuals, more as placeholders. That article prompted me to research the topic more closely.

Here’s the thing: Many of us work in industries where AI is increasingly embedded into the products and services we market, sell, and deliver. I’m not an AI architect or engineer; I’ve taken training for various platforms, sat through demos, and worked on go-to-market and sales strategies for solutions that claim to harness its power.

What I’ve consistently seen—across clients, partners, and internal teams—isn’t a rush to replace people. It’s a push to equip them: streamline manual tasks, speed up decision-making, improve customer targeting, manage regulatory concerns, and reduce operational drag. But you wouldn’t know that from the headlines—or, increasingly, from the way some executives are talking. Instead of treating AI like a versatile toolset, they’re wielding it like a blunt instrument.

The Research: Are Jobs Really Being Replaced?
From my reading on this topic, there’s plenty of credible research suggesting that the AI jobs apocalypse just isn’t materializing—at least not yet.

According to a March 2024 report from the OECD, while AI is expected to transform 27% of all jobs in member countries, actual job displacement due to AI has been limited so far. The report finds more evidence of task augmentation than outright replacement.

A 2023 study from MIT’s Work of the Future Initiative found similar patterns: AI is automating specific tasks, not entire occupations. Think of AI drafting the first version of a document for a marketer, or helping glean information from medical records rather than replacing the nurse doing it.

Even Goldman Sachs, whose 2023 report sparked many headlines claiming “300 million jobs could be impacted,” clarified that most of the change will occur through task transformation, not layoffs.

So if the data shows minimal job loss so far, what are people seeing?

What’s Really Happening on the Ground?
Some industries are using AI to reduce labor costs—most notably:
  • Customer service: Major banks and telcos have replaced call center agents with AI chatbots for routine issues.
  • Media: A few digital newsrooms are experimenting with AI-generated content at scale (with mixed results).
  • Coding & Testing: Some tech firms use tools like GitHub Copilot to reduce the need for junior developers.
But even here, companies are reallocating labor rather than laying off en masse. The goal is faster and better service, not zero humans.

In contrast, in healthcare, consulting, legal, and B2B SaaS, AI is primarily used as an efficiency tool—streamlining research, customizing recommendations, or automating reporting. People are still central to the process.

So Why the Dystopian Mood?
It comes down to how leaders choose to use AI. The WSJ article I previously mentioned makes a compelling argument in one respect: some executives are viewing AI as a way to “restructure” and shift power dynamics. Not because the tech requires it, but because it creates a convenient excuse.

This isn’t about AI replacing jobs. It’s about leaders trying to justify doing what they already wanted to do—cutting headcount, reducing costs, or removing friction points between management and labor. AI provides them plausible deniability.

That’s not inevitable. It’s a choice.

What We Should Be Asking
The real question isn’t “Will AI take my job?” It’s “How will my role change, and will leadership reinvest those gains in people?”

Because another wave is coming: one where AI actually enables new roles, such as AI ethicists, customer journey designers, and model auditors. But none of that happens if the mindset is, “How can we get rid of people?”

If you’re in the trenches, you know: the most potent use of AI is when it helps people do their jobs better, not when it makes them vanish.

AI dystopia should not become a self-fulfilling prophecy.

Source Material
1. OECD (2024) – The Impact of AI on the Labour Market: What We Know So Far
   https://www.oecd.org/employment/impact-of-ai-on-jobs-2024.pdf
2. MIT Work of the Future (2023) – Exploring the Future of Work with Generative AI
   https://workofthefuture.mit.edu
3. Goldman Sachs (2023) – The Potentially Large Effects of Artificial Intelligence on Economic Growth
   https://www.goldmansachs.com/intelligence/pages/ai-could-boost-global-gdp.html
4. Wall Street Journal (May 2025) – Companies Are Starting to Treat Workers Differently Because of AI
   (Note: this article sits behind a paywall; I read it via Apple News.)

Fake It ’Til You Build It? The Rise of the AI Imposter

5/5/2025

0 Comments

 
Artificial Intelligence (AI) is no longer hype—it’s business-critical. From predictive analytics to generative copilots, the technology is being rapidly adopted across industries. But as AI enters the mainstream, so do those eager to cash in on the buzz. Lately, it seems like everyone is an “AI specialist”—from seasoned engineers to those who have barely scratched the surface of a Coursera course. And so I had to ask: Is this real?

After some heavy reading, coursework, and hands-on exploration—including building AI agents and studying governance frameworks—the answer is clear: yes, the flood of self-appointed AI experts is a real phenomenon, and it poses real risks.

In my research for this blog entry, I came across consistent warnings from thought leaders and analyst firms about a growing divide between those who can speak fluently about AI and those who can actually implement it. Gartner has described this as “AI washing”: the rebranding of traditional services with AI terminology without delivering substantive capabilities. McKinsey’s latest AI report noted that while adoption is up, many companies struggle to scale beyond pilots. Forbes and TechTarget have also highlighted how many consultants focus more on storytelling than on real delivery. Even among some bona fide integrators, there appears to be a growing pattern: some talk fluently about AI value and go-to-market motion, yet have no track record of actual implementations, products, or agent-based design.

What’s Driving the Rise of AI Charlatans?
The explosion of generative AI tools like ChatGPT (OpenAI), GitHub Copilot (GitHub & OpenAI), and Claude (Anthropic) has lowered the barrier to entry for AI conversations—but not necessarily for implementation. This has created an ecosystem where “AI strategists” can thrive on surface-level knowledge while appearing credible to non-technical stakeholders.

Three Types of AI Actors
In my readings, I found there to be three broad types of AI professionals:

The Charlatan
Talks fluently about AGI, LLMs, and “transforming the enterprise,” but can’t explain what an embedding is or how to vet a dataset. Often lacks hands-on experience and overuses hype words with little substance.

The Business-Aligned Generalist
Understands their domain (e.g., marketing, supply chain, compliance) and how AI can improve it, but doesn’t build models or own architectures. Perfectly credible—so long as they stay in their lane.

The Practitioner
This is the engineer, data scientist, or technical leader who has designed, deployed, and evaluated AI systems. They understand model limitations, governance risks, and system integration challenges.

What Businesses Should Watch Out For

  • No Implementation Trail: True AI experts (and the organizations they work for) can point to past deployments—even prototypes, accelerators, or products—and speak fluently about tradeoffs.
  • Governance Blind Spots: If your AI “consultant” isn’t discussing model risk, compliance, or ethical use, that’s a red flag. As both Gartner and McKinsey note, responsible AI requires governance frameworks, model monitoring, and risk mitigation plans.
  • Buzzword Bingo: Real practitioners talk about model evaluation, API latency, and fine-tuning parameters—not just “hyper-automation” or a “GPT-fueled revolution.”
  • No Business Context: AI projects must be mapped to measurable business outcomes. If someone can’t tie model performance to process improvement, customer impact, or ROI, they’re selling magic, not solutions.
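
To ground the “Buzzword Bingo” point: model evaluation is something a practitioner can show, not just say. Here is a minimal, illustrative Python sketch (made-up data, no real model) of computing precision and recall for a classifier’s predictions:

```python
# Toy illustration: the kind of basic model evaluation a real
# practitioner can walk you through (all data here is made up).

def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for one class from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical predictions from a "high-risk claim" classifier.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

p, r = precision_recall(actual, predicted)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

If a consultant can’t explain tradeoffs like these (when precision matters more for one workflow and recall for another), that’s the tell.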

How to Separate the Real from the Pretend

  • Ask About Projects: What did they build? For whom? What were the tradeoffs?
  • Check for Cross-Functional Fluency: Can they talk with both engineers and executives?
  • Demand Accountability: Do they push for governance boards, pilot phases, and clear success metrics?

Final Thoughts
AI is powerful—but only when implemented responsibly. And while I’m no AI guru, I’ve been around long enough to see a few tech trends come and go. This one felt different. The noise, the hype, the rush to claim expertise—it compelled me to dig in, do some research, and understand what’s real and what’s not. What I found was clear: businesses need to be just as discerning about who they trust with AI as they are about the technology itself. So yes, beware the AI imposter. Ask hard questions. And if someone can’t explain how a model helps your business, it’s probably time to move on.

Sources
These readings helped me write this entry more coherently:
  • Gartner. “AI Washing: How to Spot and Avoid It.” 2023. https://www.gartner.com/en/articles/ai-washing-how-to-spot-and-avoid-it
  • McKinsey & Company. “The State of AI in 2023.” https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023
  • Forbes. “AI Consulting Is Booming — But Expertise Isn’t Always Part of the Deal.” 2023.
  • TechTarget. “How to Avoid AI-Washing and Choose the Right AI Partner.” 2023.

    Author

    Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.

