The EU AI Act is officially law, and its impact won’t be confined to European borders. If history is any guide, we’re watching the early stages of what some are calling “The Brussels Effect 2.0.” Just as GDPR reshaped global data privacy standards, the AI Act is poised to redefine how companies build, govern, and scale artificial intelligence. But this isn’t just about compliance. It’s about strategy. Companies that treat the AI Act as a bureaucratic nuisance will play catch-up. The smart ones—those that start aligning their models, governance, and transparency practices now—stand to gain a lasting edge. Why? Because EU standards have a way of becoming global defaults, whether your business is based in Brussels, Boston, or Bangalore. I unpack what this means in practice in the full essay, linked below.
📖 Read the full long-form essay here on Substack: 👉 https://open.substack.com/pub/axelnewe/p/the-brussels-effect-20-building-ai
One of the first things I learned in high school computer class—back when we sat behind Atari 800s and green-screen Commodore PETs—was the simple but enduring truth of “GIGO”: Garbage In, Garbage Out. Feed a computer bad code or unreliable data, and you’ll get nonsense—or worse—back. That concept has stuck with me throughout my career in technology consulting, and I’m reminded of it now more than ever, watching the rise of generative AI [1].
As the father of three college-age kids, I’ve also had a front-row seat to the rise of AI-generated content in education. In the past few years, I’ve seen my kids’ papers flagged for “AI plagiarism” by tools like Turnitin, even when they hadn’t used AI at all. False positives have gone up, not down. And they’re not alone: Stanford researchers have shown that most AI detection tools produce biased and unreliable results, often penalizing non-native English speakers [2].

But this issue goes far beyond academia. The same kinds of unreliable input-output loops are creeping into business AI systems that we trust to make decisions, generate insights, or interact with customers. It’s worth asking: what happens when AI systems become just as prone to garbage outputs—not because of human error, but because the AI platforms themselves are degrading over time?

The Lifecycle of AI Decay

A growing number of technologists and users have observed a pattern in digital platforms that Cory Doctorow colorfully calls enshittification—a process where systems initially serve users well, then shift to prioritize monetization and control, ultimately degrading the user experience [3]. Not every AI platform is headed there, but some already follow a recognizable version of that pattern.
We’re already seeing proactive efforts from the major players. Salesforce has introduced its Einstein Trust Layer to control what goes into and comes out of LLMs in enterprise settings, aiming to preserve security and data lineage [4]. Anthropic is embedding legal and ethical boundaries into its models through Constitutional AI, a framework inspired by democratic safeguards [5]. Apple is reportedly building private, on-device AI systems with an emphasis on user control and data sovereignty [5]. And Google has taken steps to address bias and misinformation in its Gemini and Bard platforms, adding transparency tools and content verification methods [6].

On the flip side, changes to OpenAI’s GPT models have triggered concerns about data quality, transparency, and trustworthiness [7]. But the risk isn’t limited to big names. Smaller AI vendors—especially those packaging commercial LLMs behind minimal safeguards—are already facing scrutiny. Italy’s antitrust regulator has launched an investigation into DeepSeek, a lesser-known AI firm accused of misleading users with hallucinated results and inadequate disclosures [8]. As these examples show, reputational fallout comes fast when systems are rushed to market without enough rigor.
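To make the input-control side of those efforts concrete, here is a minimal sketch of the pattern an enterprise trust layer follows: mask obvious PII before a prompt leaves your boundary, and keep an audit record for data lineage. The redaction patterns and the `call_llm` stub are illustrative assumptions of mine, not Salesforce’s actual implementation.

```python
import re
from datetime import datetime, timezone

# Illustrative-only redaction patterns; a real trust layer covers far more cases.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Mask obvious PII before the prompt leaves the enterprise boundary."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

def guarded_call(prompt: str, call_llm) -> str:
    """Wrap any LLM call with redaction plus a simple audit record (data lineage)."""
    safe_prompt, findings = redact(prompt)
    audit = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "redactions": findings,
        "prompt_chars": len(safe_prompt),
    }
    print("audit:", audit)          # in practice: write to a tamper-evident log
    return call_llm(safe_prompt)    # call_llm is a stand-in for any model client

if __name__ == "__main__":
    fake_llm = lambda p: f"(model response to: {p})"
    print(guarded_call("Email jane.doe@example.com about invoice 555-867-5309", fake_llm))
```

Even a thin wrapper like this makes it obvious where stronger controls would bolt on later: better detectors, tamper-evident logs, and per-tenant policies.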
What Can Go Wrong Already Has

Real-world examples of business AI going sideways are already piling up. As Gartner notes, organizations that fail to validate AI outputs and establish clear accountability frameworks will see their risk exposure increase sharply by 2026 [9].

What We Can Do About It

This isn’t a call to panic. It’s a call to remember and apply what we already know about validating what goes into these systems, checking what comes out, and keeping a named human accountable for the results.
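As one hedged illustration of what “validate AI outputs” can mean in practice, here is a toy check that refuses to publish a generated answer when it references product IDs we do not actually hold, and routes it to a named owner instead. The `KNOWN_PRODUCT_IDS` set, the SKU format, and the reviewer name are hypothetical examples, not a prescribed framework.

```python
import re

KNOWN_PRODUCT_IDS = {"SKU-1001", "SKU-1002", "SKU-2005"}   # hypothetical ground truth

def validate_answer(answer: str) -> dict:
    """Compare what a generated answer asserts against data we actually hold."""
    cited_ids = set(re.findall(r"SKU-\d{4}", answer))
    unknown = cited_ids - KNOWN_PRODUCT_IDS
    return {
        "cited_ids": sorted(cited_ids),
        "unknown_ids": sorted(unknown),          # likely hallucinations
        "needs_human_review": bool(unknown) or not cited_ids,
    }

def accountable_publish(answer: str, reviewer: str) -> None:
    """Only publish validated answers; otherwise route to a named human owner."""
    report = validate_answer(answer)
    if report["needs_human_review"]:
        print(f"Held for review by {reviewer}: {report}")
    else:
        print("Published:", answer)

accountable_publish("Our SKU-1001 and SKU-9999 bundles ship in two days.", reviewer="ops-team")
```

The specifics differ by domain, but the shape is the same: check the model’s claims against data you trust, and make a person, not the model, the last gate before anything customer-facing goes out.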
We should also support efforts to standardize ethical AI practices—whether that’s through internal governance, emerging industry certifications, or broader coalitions like the AI Incident Database [10].

A Final Thought

AI isn’t magic. It’s code plus data plus business pressure. And unless we actively push back on the drift toward degraded, opaque, and lock-in-heavy models, we’ll see the same GIGO cycle play out—only this time with higher stakes. I think it’s worth reflecting on what we, as professionals, owe our organizations and ourselves when it comes to building trustworthy AI. It’s not just about technical excellence. It’s about remembering that clarity, honesty, and context matter, even if the tools we use speak in fluent paragraphs.

Note: I’m not a policy maker or an AI ethicist, but I’ve worked long enough in consulting, systems integration, and data-driven programs to know what can go wrong. My next post will explore what professionals like us can actually do to protect our organizations, and our clients, from the risks we’re just beginning to understand.

Sources Cited

[1] “AI – Artificial Intelligence – at Davos 2024: What to Know,” World Economic Forum, January 14, 2024. https://www.weforum.org/stories/2024/01/artificial-intelligence-ai-innovation-technology-davos-2024/
[2] “Turnitin admits frequent false positives when AI writing is below 20%,” K12 Dive, June 7, 2023. https://www.k12dive.com/news/turnitin-false-positives-AI-detector/652221/
[3] “What Is AI Enshittification? A New Term for a Growing Problem,” VentureBeat. https://venturebeat.com/ai/what-is-ai-enshittification-a-new-term-for-a-growing-problem
[4] “Inside the Einstein Trust Layer,” Salesforce Developers Blog, October 2023. https://developer.salesforce.com/blogs/2023/10/inside-the-einstein-trust-layer
[5] “How Anthropic’s ‘Constitutional AI’ teaches values through model feedback,” Anthropic News, December 15, 2022. https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback
[6] “Responsible AI: Our 2024 report and ongoing work,” Google Blog. https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work
[7] “Is ChatGPT getting worse over time? Study claims yes—but others aren’t sure,” Ars Technica, July 2023. https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/
[8] “Italy regulator probes DeepSeek over false information risks,” Reuters, June 16, 2025. https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/
[9] “Gartner predicts 30% of generative AI projects will be abandoned after proof of concept due to poor data quality and lack of trust,” AI Business, citing Gartner, August 1, 2024. https://aibusiness.com/automation/gartner-predicts-30-of-generative-ai-initiatives-will-fail-by-2025
[10] AI Incident Database, a shared space to report, track, and learn from real-world AI failures and harms. https://incidentdatabase.ai

Artificial Intelligence (AI) is no longer hype—it’s business-critical. From predictive analytics to generative copilots, the technology is being rapidly adopted across industries. But as AI enters the mainstream, so do those eager to cash in on the buzz. Lately, it seems like everyone is an “AI specialist”—from seasoned engineers to those who have barely scratched the surface of a Coursera course. And so I had to ask: is this real?
After some heavy reading, coursework, and hands-on exploration—including building AI agents and studying governance frameworks—the answer is clear: yes, the flood of self-appointed AI experts is an actual phenomenon, and it poses real risks.

In my research for this post, I came across consistent warnings from thought leaders and analyst firms about a growing divide between those who can speak fluently about AI and those who can actually implement it. Gartner has described this as “AI washing”: the rebranding of traditional services with AI terminology without delivering substantive capabilities. McKinsey’s latest AI report noted that while adoption is up, many companies struggle to scale beyond pilots. Forbes and TechTarget have also highlighted how many consultants focus more on storytelling than on real delivery. Even among some bona fide integrators, there is a growing pattern: some talk fluently about AI value and go-to-market motion, yet have no track record of actual implementations, products, or agent-based design.

What’s Driving the Rise of AI Charlatans?

The explosion of generative AI tools like ChatGPT (OpenAI), GitHub Copilot (GitHub and OpenAI), and Claude (Anthropic) has lowered the barrier to entry for AI conversations, but not necessarily for implementation. This has created an ecosystem where “AI strategists” can thrive on surface-level knowledge while appearing credible to non-technical stakeholders.

Three Types of AI Actors

In my reading, I found three broad types of AI professionals:

The Charlatan: Talks fluently about AGI, LLMs, and “transforming the enterprise,” but can’t explain what an embedding is (see the toy example after this list) or how to vet a dataset. Often lacks hands-on experience and overuses hype words with little substance.

The Business-Aligned Generalist: Understands their domain (e.g., marketing, supply chain, compliance) and how AI can improve it, but doesn’t build models or own architectures. Perfectly credible, so long as they stay in their lane.

The Practitioner: The engineer, data scientist, or technical leader who has designed, deployed, and evaluated AI systems. They understand model limitations, governance risks, and system integration challenges.
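Since “embedding” is exactly the kind of term that separates these groups, here is a deliberately toy illustration of the idea: texts become vectors of numbers, and texts with similar content end up with similar vectors. Real embedding models learn dense vectors from data rather than counting words against a hand-picked vocabulary; the vocabulary and sentences below are my own made-up examples.

```python
from collections import Counter
from math import sqrt

def bow_embedding(text: str, vocab: list[str]) -> list[float]:
    """Toy 'embedding': word counts over a fixed vocabulary (real models learn dense vectors)."""
    counts = Counter(text.lower().split())
    return [float(counts[word]) for word in vocab]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard similarity measure between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

vocab = ["invoice", "payment", "late", "cycling", "race"]
a = bow_embedding("late payment on the invoice", vocab)
b = bow_embedding("invoice payment is late", vocab)
c = bow_embedding("weekend cycling race", vocab)
print(cosine_similarity(a, b))   # high: similar meaning in this toy space
print(cosine_similarity(a, c))   # 0.0: unrelated
```

A practitioner can explain both this toy version and why production systems use learned, dense embeddings instead.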
What Businesses Should Watch Out For

How to Separate the Real from the Pretend
Final Thoughts

AI is powerful—but only when implemented responsibly. And while I’m no AI guru, I’ve been around long enough to see a few tech trends come and go. This one felt different. The noise, the hype, the rush to claim expertise—it compelled me to dig in, do some research, and understand what’s real and what’s not. What I found was clear: businesses need to be just as discerning about who they trust with AI as they are about the technology itself. So yes, beware the AI imposter. Ask hard questions. And if someone can’t explain how a model helps your business, it’s probably time to move on.
Author

Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.