One of the first things I learned in high school computer class—back when we sat behind Atari 800s and green-screen Commodore PETs—was the simple but enduring truth of “GIGO”: Garbage In, Garbage Out. Feed a computer bad code or unreliable data, and you’ll get nonsense—or worse—back. That concept has stuck with me throughout my career in technology consulting, and I’m reminded of it now more than ever, watching the rise of generative AI [1].
As the father of three college-age kids, I’ve also had a front-row seat to the rise of AI-generated content in education. In the past few years, I’ve seen my kids’ papers flagged for “AI plagiarism” by tools like Turnitin, even when they hadn’t used AI at all. False positives have gone up, not down. And they’re not alone: Stanford researchers have shown that most AI detection tools produce biased and unreliable results, often penalizing non-native English speakers [2].

But this issue goes far beyond academia. The same kinds of unreliable input-output loops are creeping into the business AI systems we trust to make decisions, generate insights, and interact with customers. It’s worth asking: what happens when AI systems become just as prone to garbage outputs—not because of human error, but because the AI platforms themselves are degrading over time?

The Lifecycle of AI Decay

A growing number of technologists and users have observed a pattern in digital platforms that Cory Doctorow colorfully calls enshittification—a process where systems initially serve users well, then shift to prioritize monetization and control, ultimately degrading the user experience [3]. Not every AI platform is headed there, but some already follow a recognizable version of that pattern.
We’re already seeing proactive efforts from the major players. Salesforce has introduced its Einstein Trust Layer to control what goes into and comes out of LLMs in enterprise settings, aiming to preserve security and data lineage [4]. Anthropic is embedding legal and ethical boundaries into its models through Constitutional AI, a framework inspired by democratic safeguards [5]. Apple is reportedly building private, on-device AI systems with an emphasis on user control and data sovereignty [5]. And Google has taken steps to address bias and misinformation in its Gemini and Bard platforms, adding transparency tools and content verification methods [6].

On the flip side, changes to OpenAI’s GPT models have triggered concerns about data quality, transparency, and trustworthiness [7]. But the risk isn’t limited to big names. Smaller AI vendors—especially those packaging commercial LLMs behind minimal safeguards—are already facing scrutiny. Italy’s antitrust regulator has launched an investigation into DeepSeek, a lesser-known AI firm accused of misleading users with hallucinated results and inadequate disclosures [8]. As these examples show, reputational fallout comes fast when systems are rushed to market without enough rigor.

What Can Go Wrong Already Has

There are already real-world examples of business AI going sideways.
As Gartner notes, organizations that fail to validate AI outputs and establish clear accountability frameworks will see their risk exposure increase sharply by 2026 [9].

What We Can Do About It

This isn’t a call to panic. It’s a call to remember what we already know.
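In practice, “validate AI outputs” can start with something unglamorous: a gate that refuses to pass model output downstream until it meets basic checks. The sketch below is purely illustrative; the JSON schema, the field names (answer, sources, confidence), and the 0.7 threshold are my own assumptions for the example, not any vendor’s API.

```python
# A minimal, illustrative "validate before you act" gate for LLM output.
# The schema, field names, and threshold are hypothetical -- adapt to your own stack.
import json
from dataclasses import dataclass, field


@dataclass
class ValidationResult:
    ok: bool
    issues: list[str] = field(default_factory=list)


REQUIRED_FIELDS = {"answer", "sources", "confidence"}  # assumed response schema


def validate_llm_output(raw: str) -> ValidationResult:
    """Hold back model output that is malformed, unsourced, or low-confidence
    rather than letting it flow straight into a business process."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ValidationResult(False, ["response is not valid JSON"])

    issues = []
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if not data.get("sources"):
        issues.append("no sources cited; route to human review")
    if data.get("confidence", 0.0) < 0.7:  # arbitrary threshold, for illustration only
        issues.append("confidence below threshold; route to human review")

    return ValidationResult(ok=not issues, issues=issues)


if __name__ == "__main__":
    example = '{"answer": "Approve the claim.", "sources": [], "confidence": 0.55}'
    result = validate_llm_output(example)
    print("accepted" if result.ok else f"held for review: {result.issues}")
```

The specific checks matter less than the habit: output leaving a model deserves at least the scrutiny we would give data going into one. That’s GIGO, applied in both directions.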
We should also support efforts to standardize ethical AI practices—whether that’s through internal governance, emerging industry certifications, or broader coalitions like the AI Incident Database [10].

A Final Thought

AI isn’t magic. It’s code plus data plus business pressure. And unless we actively push back on the drift toward degraded, opaque, and lock-in-heavy models, we’ll see the same GIGO cycle play out—only this time with greater stakes.

I think it’s worth reflecting on what we, as professionals, owe our organizations and ourselves when it comes to building trustworthy AI. It’s not just about technical excellence. It’s about remembering that clarity, honesty, and context matter—even if the tools we use speak in fluent paragraphs.

A note: I’m not a policymaker or an AI ethicist—but I’ve worked long enough in consulting, systems integration, and data-driven programs to know what can go wrong. My next post will explore what professionals like us can actually do to protect our organizations—and our clients—from the risks we’re just beginning to understand.

Sources Cited

[1] “AI – Artificial Intelligence – at Davos 2024: What to Know,” World Economic Forum (January 14, 2024). https://www.weforum.org/stories/2024/01/artificial-intelligence-ai-innovation-technology-davos-2024/
[2] “Turnitin admits frequent false positives when AI writing is below 20%,” K12 Dive (June 7, 2023). https://www.k12dive.com/news/turnitin-false-positives-AI-detector/652221/
[3] “What Is AI Enshittification? A New Term for a Growing Problem,” VentureBeat. https://venturebeat.com/ai/what-is-ai-enshittification-a-new-term-for-a-growing-problem
[4] “Inside the Einstein Trust Layer,” Salesforce Developers Blog. https://developer.salesforce.com/blogs/2023/10/inside-the-einstein-trust-layer
[5] “How Anthropic’s ‘Constitutional AI’ teaches values through model feedback,” Anthropic News (December 15, 2022). https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback
[6] “Responsible AI: Our 2024 report and ongoing work,” Google Blog. https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work
[7] “Is ChatGPT getting worse over time? Study claims yes—but others aren’t sure,” Ars Technica. https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/
[8] “Italy regulator probes DeepSeek over false information risks,” Reuters (June 16, 2025). https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/
[9] “Gartner predicts 30% of generative AI projects will be abandoned after proof-of-concept due to poor data quality and lack of trust,” AI Business, citing Gartner (August 1, 2024). https://aibusiness.com/automation/gartner-predicts-30-of-generative-ai-initiatives-will-fail-by-2025
[10] AI Incident Database – a shared space to report, track, and learn from real-world AI failures and harms. https://incidentdatabase.ai
Author: Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.