AXEL NEWE

From the Field: Thoughts on Growth, Tech, Democracy & Life

Garbage In, Garbage Out: AI's Data Dilemma and the Professional's Role

6/16/2025

One of the first things I learned in high school computer class—back when we sat behind Atari 800s and green-screen Commodore PETs—was the simple but enduring truth of “GIGO”: Garbage In, Garbage Out. Feed a computer bad code or unreliable data, and you’ll get nonsense—or worse—back. That concept has stuck with me throughout my career in technology consulting, and I’m reminded of it now more than ever, watching the rise of generative AI [1].

As the father of three college-age kids, I’ve also had a front-row seat to the rise of AI-generated content in education. In the past few years, I’ve seen my kids’ papers flagged for “AI plagiarism” by tools like Turnitin, even when they hadn’t used AI at all. False positives have gone up, not down [2]. And they’re not alone: Stanford researchers have shown that most AI detection tools produce biased and unreliable results, often penalizing non-native English speakers.

But this issue goes far beyond academia. The same kinds of unreliable input-output loops are creeping into business AI systems that we trust to make decisions, generate insights, or interact with customers. It’s worth asking: what happens when AI systems become just as prone to garbage outputs—not because of human error, but because the AI platforms themselves are degrading over time?

The Lifecycle of AI Decay
A growing number of technologists and users have observed a pattern in digital platforms that Cory Doctorow colorfully calls enshittification—a process where systems initially serve users well, then shift to prioritize monetization and control, ultimately degrading the user experience [3].

Not every AI platform is headed there, but some follow a recognizable pattern:
  1. Promising Start: Early models impress with accuracy and openness (e.g., GPT-3, Claude 1.0).
  2. Monetization Phase: Access gets restricted or paywalled. Free users get throttled or lower-quality results.
  3. Guardrails & Tuning: To avoid legal or reputational risk, platforms introduce filters, often opaque to users.
  4. Ecosystem Lock-in: Companies get tied into proprietary stacks, where auditing AI decisions is difficult.
  5. Degradation: Over-tuned models become bland, inaccurate, or biased. Trust erodes.

We’re already seeing proactive efforts from the major players. Salesforce has introduced its Einstein Trust Layer to control what goes in and out of LLMs in enterprise settings, aiming to preserve security and data lineage [4]. Anthropic is embedding legal and ethical boundaries into its models through Constitutional AI, a framework of explicit principles enforced through model feedback [5]. Apple is reportedly building private, on-device AI systems with an emphasis on user control and data sovereignty. And Google has taken steps to address bias and misinformation in its Gemini and Bard platforms, adding transparency tools and content verification methods [6].

On the flip side, changes to OpenAI’s GPT models have triggered concerns about data quality, transparency, and trustworthiness [7]. But the risk isn’t limited to big names. Smaller AI vendors—especially those packaging commercial LLMs behind minimal safeguards—are already facing scrutiny. Italy’s antitrust regulator has launched an investigation into DeepSeek, accusing the firm of misleading users with hallucinated results and inadequate disclosures [8]. As these examples show, reputational fallout comes fast when systems are rushed to market without enough rigor.

What Can Go Wrong Already Has
There are now real-world examples of business AI going sideways:
  • AI-generated resumes and job descriptions that reflect discriminatory biases from training data.
  • Customer service chatbots giving outdated or legally risky answers (think hallucinated refund policies).
  • AI-generated code that violates licensing terms or creates vulnerabilities.
  • Marketing content that misquotes sources or fabricates statistics.

Gartner predicts that at least 30 percent of generative AI projects will be abandoned after proof of concept by the end of 2025, due in large part to poor data quality and a lack of trust in outputs [9].

What We Can Do About It
This isn’t a call to panic. It’s a call to remember what we already know:
  • Scrutinize the Inputs: Know where your AI models get their training data and who is curating it.
  • Monitor the Outputs: Validate important results. Human-in-the-loop processes still matter.
  • Diversify Your Stack: Don’t overcommit to one vendor’s AI. Maintain flexibility to pivot.
  • Document Assumptions: Keep track of model prompts, settings, and data lineage (see the sketch after this list).
  • Ask Questions: Not all AI strategies are as rigorous as they claim. Transparency is a differentiator.
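
To make “Monitor the Outputs” and “Document Assumptions” concrete, here’s a minimal sketch in Python of what an audited model call could look like. Treat it as illustrative only: call_model is a hypothetical stand-in for whatever client your vendor provides, the review heuristic is deliberately naive, and the log file name is arbitrary. The shape is what matters: record every prompt, every setting, and every output, and flag risky answers for a human.

import json
import datetime
from dataclasses import dataclass, asdict

@dataclass
class ModelCall:
    """One audited AI call: the assumptions behind an answer, on the record."""
    timestamp: str        # when the call happened (UTC, ISO 8601)
    model: str            # which model/version produced the output
    settings: dict        # temperature, max tokens, and other knobs
    prompt: str           # the exact input sent to the model
    output: str           # the raw output received
    needs_review: bool    # human-in-the-loop flag

def call_model(prompt: str, settings: dict) -> str:
    # Hypothetical stand-in: replace with your vendor's actual client call.
    return "model output goes here"

def audited_call(prompt: str, settings: dict,
                 log_path: str = "ai_audit_log.jsonl") -> str:
    output = call_model(prompt, settings)
    # Naive example heuristic: route anything that sounds like policy or
    # statistics to a human before it reaches a customer.
    needs_review = any(term in output.lower() for term in ("refund", "policy", "%"))
    record = ModelCall(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        model=settings.get("model", "unknown"),
        settings=settings,
        prompt=prompt,
        output=output,
        needs_review=needs_review,
    )
    # Append-only JSONL log: prompts, settings, and lineage in one place.
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return output

# Example: answer = audited_call("Summarize our refund policy.",
#                                {"model": "your-model-here", "temperature": 0.2})

Crude as it is, an append-only record like this answers the question that matters most when an AI output is challenged: what exactly did we ask, with what settings, and what came back?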

We should also support efforts to standardize ethical AI practices—whether that’s through internal governance, emerging industry certifications, or broader coalitions like the AI Incident Database [10].

A Final Thought
AI isn’t magic. It’s code plus data plus business pressure. And unless we actively push back on the drift toward degraded, opaque, and lock-in-heavy models, we’ll see the same GIGO cycle play out—only this time with greater stakes.

I think it’s worth reflecting on what we, as professionals, owe our organizations and ourselves when it comes to building trustworthy AI. It’s not just about technical excellence. It’s about remembering that clarity, honesty, and context matter—even if the tools we use speak in fluent paragraphs.

Note!
I’m not a policy maker or an AI ethicist—but I’ve worked long enough in consulting, systems integration, and data-driven programs to know what can go wrong. My next post will explore what professionals like us can actually do to protect our organizations—and our clients—from the risks we’re just beginning to understand.

Sources Cited

[1] “AI – Artificial Intelligence – at Davos 2024: What to Know,” World Economic Forum, January 14, 2024.
https://www.weforum.org/stories/2024/01/artificial-intelligence-ai-innovation-technology-davos-2024/

[2] “Turnitin Admits Frequent False Positives When AI Writing Is Below 20%,” K12 Dive, June 7, 2023.
https://www.k12dive.com/news/turnitin-false-positives-AI-detector/652221/

[3] “What Is AI Enshittification? A New Term for a Growing Problem,” VentureBeat.
https://venturebeat.com/ai/what-is-ai-enshittification-a-new-term-for-a-growing-problem

[4] “Inside the Einstein Trust Layer,” Salesforce Developers Blog, October 2023.
https://developer.salesforce.com/blogs/2023/10/inside-the-einstein-trust-layer

[5] “Constitutional AI: Harmlessness from AI Feedback,” Anthropic News, December 15, 2022.
https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback

[6] “Responsible AI: Our 2024 Report and Ongoing Work,” Google Blog.
https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work

[7] “Is ChatGPT Getting Worse over Time? Study Claims Yes—but Others Aren’t Sure,” Ars Technica, July 2023.
https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/

[8] “Italy Regulator Probes DeepSeek over False Information Risks,” Reuters, June 16, 2025.
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/

[9] “Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof-of-Concept Due to Poor Data Quality and Lack of Trust,” AI Business (citing Gartner), August 1, 2024.
https://aibusiness.com/automation/gartner-predicts-30-of-generative-ai-initiatives-will-fail-by-2025

[10] AI Incident Database, a shared space to report, track, and learn from real-world AI failures and harms.
https://incidentdatabase.ai


    Author

    Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.
