The EU AI Act is officially law, and its impact won’t be confined to European borders. If history is any guide, we’re watching the early stages of what some are calling “The Brussels Effect 2.0.” Just as GDPR reshaped global data privacy standards, the AI Act is poised to redefine how companies build, govern, and scale artificial intelligence.

But this isn’t just about compliance. It’s about strategy. Companies that treat the AI Act as a bureaucratic nuisance will play catch-up. The smart ones, those that start aligning their models, governance, and transparency practices now, stand to gain a lasting edge. Why? Because EU standards have a way of becoming global defaults, whether your business is based in Brussels, Boston, or Bangalore.

In this piece, I unpack:
📖 Read the full long-form essay here on Substack: 👉 https://open.substack.com/pub/axelnewe/p/the-brussels-effect-20-building-ai
Agentic AI—systems that act on data rather than just analyze it—is being hailed as a cure-all for the healthcare industry’s inefficiencies. Payers, providers, and pharma firms are investing fast. But how much of what’s being promised is actually feasible today, and how much is branding-driven hype?
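To make the "act on data rather than just analyze it" distinction concrete, here is a minimal toy sketch, not from the white paper and with all names hypothetical. An analytic model stops at a finding; an agentic step takes the next action itself:

```python
# Toy illustration of "analyze" vs. "act" in a claims workflow.
# All names (Claim, request_records, route_to_adjudication) are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    missing_fields: list[str]

def analytic_model(claim: Claim) -> str:
    """Analysis only: produces a finding for a human to act on."""
    if claim.missing_fields:
        return f"Claim {claim.claim_id}: incomplete ({', '.join(claim.missing_fields)})"
    return f"Claim {claim.claim_id}: ready for review"

def request_records(claim_id: str, fields: list[str]) -> None:
    print(f"[action] asking provider for {fields} on claim {claim_id}")

def route_to_adjudication(claim_id: str) -> None:
    print(f"[action] claim {claim_id} sent to adjudication queue")

def agentic_step(claim: Claim) -> str:
    """Agentic: decides AND acts, by requesting records or routing the claim."""
    if claim.missing_fields:
        # An agent takes the next step itself instead of just flagging it.
        request_records(claim.claim_id, claim.missing_fields)
        return "requested missing records"
    route_to_adjudication(claim.claim_id)
    return "routed to adjudication"

if __name__ == "__main__":
    claim = Claim("C-1001", 1250.0, ["diagnosis_code"])
    print(analytic_model(claim))  # just a finding
    print(agentic_step(claim))    # a finding plus a side effect
```

Everything hard about real deployments (guardrails, auditability, who approves the action) lives in that side-effect line, which is exactly where the hype and the reality tend to diverge.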
In a new white paper, I explore the advertised, actual, and emerging uses of agentic AI in healthcare. From Salesforce’s acquisition of Informatica to UK-based “AI” firms exposed for running smoke-and-mirrors operations, it’s clear that the field needs clarity and accountability. This blog provides a preview of what you’ll find in that deeper dive.

What’s Being Promised:
What’s Working Now:
The Gap:
Consulting Firms: The Connective Tissue

It’s not just product companies shaping this space. Many consulting firms, including Cognizant, Deloitte, EPAM, Accenture, and Slalom, play a unique hybrid role. They may:
Far from adding confusion, these firms often bring much-needed structure, compliance rigor, and domain context. They’re helping AI move from lab demo to daily workflow.

Case in Point: British “AI” firm Repliq was exposed by the Financial Times for passing off manual processes as generative AI, with junior developers writing responses behind the scenes. It was a textbook case of vaporware wrapped in buzzwords.

Read the White Paper: The companion white paper explores:
Conclusion: AI won’t save healthcare overnight. But real, responsible agentic AI, built on clean data, governed properly, and validated openly, can still move the needle. We just have to know where to look.

Read more: Get the Full White Paper - Agentic AI in Healthcare: Sorting Real Innovation from Vaporware

In Part I, I drew the historical line between Charles X of France and Donald Trump. Both rose to power on promises of restoration. Both alienated legislatures. Both flirted with silencing dissent. Charles went too far. Trump might, too.
So what happened when Charles X crossed the line? The answer lies in the events that began on 26 July 1830, when Charles issued a set of repressive decrees known as the July Ordinances, which:

- suspended the liberty of the press, placing newspapers under government authorization;
- dissolved the newly elected Chamber of Deputies before it had even convened;
- rewrote the electoral rules to shrink the electorate to the wealthiest landowners;
- called new elections under those narrowed rules.
By the morning of July 27, Parisians revolted. Workers, students, and even some middle-class citizens took to the streets. What followed wasn’t a chaotic civil war but a highly focused push to defend civil rights and constitutional government. Despite personal risk, the media took the lead in keeping the citizens of France informed and helped kick off the revolution. Tradesmen, workers, and merchants followed suit. Charles abdicated, fled to Britain, and the monarchy was replaced (briefly) by a constitutional regime.

What can that teach us?

Resisting Autocracy Doesn't Require Violence

The July Revolution worked not because it burned everything down, but because it focused on defending institutions, not destroying them. The press played a critical role. So did moderate politicians who refused to accept illegal decrees. Today, we’re not facing royal ordinances, but we are looking at:
The Power of Civil Society

In 1830 France, it was the teachers, printers, and municipal workers, not just elites, who resisted. They refused to implement illegal orders, slowed down compliance, and gave people space to act. Here in the U.S., we’ll need:
Final Thought: The Resistance Is Already Here

If President Trump continues to try to govern like Charles X, the institutions that survive will be the ones willing to say "no," even when it’s hard. The American republic won’t be saved by spectacle. It will be saved by professionals, institutional guardians, people who know their history, and, hopefully, the rest of us. The July Revolution lasted three days, but its effects rippled across Europe. Let’s learn something from it.

Many have seen this headline, or one similar to it: “AI is coming for your job.” “White-collar work will never be the same.” Perhaps you read a recent article in The Wall Street Journal suggesting that, because of AI, executives are starting to treat their staff less as individuals and more as placeholders. That article is what prompted me to research this topic more thoroughly.
Here’s the thing: many of us work in industries where AI is increasingly embedded into the products and services we market, sell, and deliver. I’m not an AI architect or engineer; I’ve taken training for various platforms, sat through demos, and worked on go-to-market and sales strategies for solutions that claim to harness its power. What I’ve consistently seen across clients, partners, and internal teams isn’t a rush to replace people. It’s a push to equip them: streamline manual tasks, speed up decision-making, improve customer targeting, manage regulatory concerns, and reduce operational drag.

But you wouldn’t know that from the headlines, or, increasingly, from the way some executives are talking. Instead of treating AI like a versatile toolset, they’re wielding it like a blunt instrument.

The Research: Are Jobs Really Being Replaced?

Based on my reading on this topic, there’s plenty of credible research suggesting that the AI jobs apocalypse just isn’t materializing, at least not yet. According to a March 2024 report from the OECD, while AI is expected to transform 27% of all jobs in member countries, actual job displacement due to AI has been limited so far. The report finds more evidence of task augmentation than outright replacement. A 2023 study from MIT’s Work of the Future Initiative found similar patterns: AI is automating specific tasks, not entire occupations. Think of AI drafting the first version of a document for a marketer, or helping glean information from medical records rather than replacing the nurse who reviews them. Even Goldman Sachs, whose 2023 report sparked many headlines claiming “300 million jobs could be impacted,” clarified that most of the change will occur through task transformation, not layoffs.

So if the data shows minimal job loss so far, what are people seeing?

What’s Really Happening on the Ground?

Some industries are using AI to reduce labor costs, most notably:
In contrast, in healthcare, consulting, legal, and B2B SaaS, AI is primarily used as an efficiency tool: streamlining research, customizing recommendations, or automating reporting. People are still central to the process.

So Why the Dystopian Mood?

It comes down to how leaders choose to use AI. The WSJ article I mentioned earlier makes a compelling argument in one respect: some executives view AI as a way to “restructure” and shift power dynamics. Not because the tech requires it, but because it creates a convenient excuse. This isn’t about AI replacing jobs. It’s about leaders trying to justify doing what they already wanted to do: cutting headcount, reducing costs, or removing friction points between management and labor. AI provides them with plausible deniability. That’s not inevitable. It’s a choice.

What We Should Be Asking

The real question isn’t “Will AI take my job?” It’s “How will my role change, and will leadership reinvest those gains in people?” Because another wave is coming: one where AI actually enables new roles, such as AI ethicists, customer journey designers, and model auditors. But none of that happens if the mindset is, “How can we get rid of people?” If you’re in the trenches, you know: the most potent use of AI is when it helps people do their jobs better, not when it makes them disappear. AI dystopia should not become a self-fulfilling prophecy.

Source Material

1. OECD (2024). The Impact of AI on the Labour Market: What We Know So Far. https://www.oecd.org/employment/impact-of-ai-on-jobs-2024.pdf
2. MIT Work of the Future (2023). Exploring the Future of Work with Generative AI. https://workofthefuture.mit.edu
3. Goldman Sachs (2023). The Potentially Large Effects of Artificial Intelligence on Economic Growth. https://www.goldmansachs.com/intelligence/pages/ai-could-boost-global-gdp.html
4. Wall Street Journal (May 2025). Companies Are Starting to Treat Workers Differently Because of AI. (Note: this article sits behind a paywall; I read it through Apple News.)

Artificial Intelligence (AI) is no longer hype; it’s business-critical. From predictive analytics to generative copilots, the technology is being rapidly adopted across industries. But as AI enters the mainstream, so do those eager to cash in on the buzz. Lately, it seems like everyone is an “AI specialist,” from seasoned engineers to those who have barely scratched the surface of a Coursera course. And so I had to ask: is this real?

After some heavy reading, coursework, and hands-on exploration, including building AI agents and studying governance frameworks, the answer is clear: yes, the flood of self-appointed AI experts is a real phenomenon, and it poses real risks. In my research for this blog entry, I came across consistent warnings from thought leaders and analyst firms about a growing divide between those who can speak fluently about AI and those who can actually implement it. Gartner has described this as “AI washing”: the rebranding of traditional services with AI terminology without delivering substantive capabilities. McKinsey’s latest AI report noted that while adoption is up, many companies struggle to scale beyond pilots. Forbes and TechTarget have also highlighted how many consultants focus more on storytelling than on real delivery. Even among some bona fide integrators, there appears to be a growing pattern: some talk fluently about AI value and go-to-market motion, yet have no track record of actual implementations, products, or agent-based design.

What’s Driving the Rise of AI Charlatans?
The explosion of generative AI tools like ChatGPT (OpenAI), GitHub Copilot (GitHub & OpenAI), and Claude (Anthropic) has lowered the barrier to entry for AI conversations, but not necessarily for implementation. This has created an ecosystem where “AI strategists” can thrive on surface-level knowledge while appearing credible to non-technical stakeholders.

Three Types of AI Actors

In my reading, I found three broad types of AI professionals:

The Charlatan. Talks fluently about AGI, LLMs, and “transforming the enterprise,” but can’t explain what an embedding is (see the sketch after this list) or how to vet a dataset. Often lacks hands-on experience and overuses hype words with little substance.

The Business-Aligned Generalist. Understands their domain (e.g., marketing, supply chain, compliance) and how AI can improve it, but doesn’t build models or own architectures. Perfectly credible, so long as they stay in their lane.

The Practitioner. This is the engineer, data scientist, or technical leader who has designed, deployed, and evaluated AI systems. They understand model limitations, governance risks, and system integration challenges.
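To make that embedding litmus test concrete: an embedding is just a fixed-length vector of numbers arranged so that similar inputs land close together, turning “similar meaning” into “small distance.” Here is a hand-rolled toy sketch, with made-up three-number vectors standing in for what a real model would produce:

```python
# Toy illustration of embeddings: the vectors below are invented for the
# example; a real model would emit hundreds of dimensions, not three.
import math

embeddings = {
    "refund my order":   [0.9, 0.1, 0.0],
    "return this item":  [0.8, 0.2, 0.1],
    "reset my password": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 means similar direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = embeddings["refund my order"]
for text, vec in embeddings.items():
    print(f"{text!r}: {cosine_similarity(query, vec):.2f}")
# The two refund/return requests score near 1.0; the password one near 0.0.
```

Anyone who has actually built a retrieval or classification pipeline can explain this in about that many lines; anyone who can’t probably belongs in the first category above.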
What Businesses Should Watch Out For

How to Separate the Real from the Pretend
Final Thoughts

AI is powerful, but only when implemented responsibly. And while I’m no AI guru, I’ve been around long enough to see a few tech trends come and go. This one felt different. The noise, the hype, the rush to claim expertise: it compelled me to dig in, do some research, and understand what’s real and what’s not. What I found was clear: businesses need to be just as discerning about who they trust with AI as they are about the technology itself. So yes, beware the AI imposter. Ask hard questions. And if someone can’t explain how a model helps your business, it’s probably time to move on.

Sources
These readings helped me understand the topic well enough to write this blog coherently:
Author: Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road, but from the gut: on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.