On June 21, 2025 (today!), the U.S. military carried out coordinated airstrikes on three major Iranian nuclear facilities—Fordow, Natanz, and Esfahan—using B-2 stealth bombers and 30,000-pound bunker-buster bombs. President Trump confirmed the operation on Truth Social, declaring it “very successful.” (Source: Bloomberg, The War Zone, Argus Media, AP)
If that sentence feels out of sync with Trump’s well-known calls to “end endless wars,” you’re not alone. These airstrikes signal a stunning shift in policy for an administration that has consistently wrapped itself in isolationist language—promising voters an America that minds its own business, ends foreign entanglements, and focuses on domestic affairs. Now, just weeks after Israel began bombing Iran’s nuclear infrastructure, the U.S. has fully entered the fight.

A Doctrine in Conflict

From the campaign trail to the Oval Office, Trump has marketed himself as a reluctant warrior. He pulled troops out of Afghanistan, lambasted NATO allies for expecting too much from U.S. taxpayers, and routinely mocked prior presidents for starting wars. But Trump also authorized the 2020 killing of Iran’s General Qassem Soleimani, oversaw a record number of drone strikes, and now—barely five months into his second term—has launched the most aggressive U.S. action against Iran in decades. This isn’t restraint. It’s escalation.

Why Now?

Several credible outlets confirm the following:
While the administration hasn’t released a formal statement on the motive, most observers see this as a strategic alignment with Israel. The strikes were likely meant to cripple Iran’s nuclear progress before it crossed a red line—real or perceived.

What Does Trump Really Risk?

At first glance, not much. Trump’s political brand has always been built on contradiction without consequence. He can denounce endless wars one day and order airstrikes the next—without losing core support—because his movement isn’t built on policy consistency. It’s built on performance, dominance, and defiance. But there is potential cost here, especially from the non-interventionist right—the populist, libertarian, and nationalist voices who believed “America First” actually meant less foreign entanglement, not targeted bombing raids. Figures like Tucker Carlson, Vice President J.D. Vance, and the Freedom Caucus have long warned against becoming entangled in Middle East wars, especially on behalf of foreign interests. Now that the U.S. is backing Israel’s campaign against Iran directly, some are already framing this as Trump “doing Netanyahu’s bidding.” The phrasing may vary—Netanyahu’s lackey, proxy, or yes, even his “bitch”—but the sentiment cuts into the myth that Trump answers to no one.

Here is the irony: Trump built his identity on rejecting globalist pressure and foreign influence. But this strike, no matter how surgically framed, makes him look like a co-pilot on a flight Netanyahu is piloting. And once you’re seen as someone else’s co-pilot, the brand of unilateral strength and nationalism starts to crack.

A Personal Reflection

As a veteran with two Middle Eastern deployments under my belt and a long-time observer of U.S. foreign policy, I’ve seen the slogans come and go. What rarely changes is how easily American power gets deployed—without congressional debate, without clear strategy, and too often without end. It’s fair to ask whether this is really about defending America or defining it by force projection. Either way, we’re owed answers—and oversight.

What Now?

Iran has promised retaliation. Regional tensions are at a boiling point. And the U.S., once again, is in the middle of a fight that could spiral into a broader conflict. Whether you supported Trump for his nationalist agenda or not, this is a moment to pause and ask: Is this what “America First” looks like?
Every day, more people are reporting what look like plainclothes kidnappings: no uniforms, no badges, just men in regular clothes picking someone up—sometimes in courthouses or jails—and leading them away in unmarked vehicles. It’s unsettling. But what you’ve likely witnessed isn’t a scene from a thriller—it’s part of a growing and highly controversial practice: ICE using private security contractors like G4S to detain immigrants. In this post, we explore how this happened, why it’s legally questionable, and how communities are pushing back.
Who’s Really Making the Arrest?

By law, only ICE, CBP, or DOJ officers can carry out immigration arrests—under 8 U.S.C. § 1357(a), nobody else is legally authorized. But in video after video online, it’s often unmarked operatives or private security contractors doing it, standing in for ICE without credentials or uniforms.

Are These “Bounty Hunters”? No—but It Sure Feels That Way

Viral videos describe them as “bounty hunters grabbing immigrants in public,” but that’s misleading. Licensed bail agents (“bounty hunters”) work under state criminal law and have no authority over immigration arrests. What we’re seeing is private contractors—like G4S (now part of Allied Universal)—originally hired for transport or surveillance, but now often physically detaining people in public spaces. ICE’s own internal messaging has warned that these contractors were performing “arrest-like activities,” entering a real legal grey area. (Solano v. ICE complaint, Feb 2021)

Legal Grey Zone: Why This May Be Flat-Out Illegal

These aren’t just bad optics—they may break the law:
How Did We Get Here?

Starting around 2016, ICE began outsourcing detainee transport to private firms like G4S—especially in states with sanctuary policies. Contractors now sometimes arrest people days after jail release, without ICE agents visibly present. (AP News on ICE contracts and detention surge)

Legal Battles You Should Know About

Courts are pushing back—and slowly defining the limits:
Any reforms won through these cases are geographically limited, and the practices continue nationwide.

Plainclothes & Ruse Tactics

It’s not just contractors—undercover ICE agents have started blending in during routine court and check-in operations. In May 2025, several plainclothes agents detained at least four asylum-seekers at San Francisco’s immigration court—wearing badges but using unmarked vehicles while accompanied by G4S personnel. (San Francisco Standard) These operations have been widely condemned as fear tactics that undermine due process. (Tennessee Courthouse Raid – Action5 News)

Why We Should All Care

When non-uniformed agents conduct high-impact detentions:
Why Nothing Has Changed
Solutions on the Table

To restore trust and legality, we need:
What You Can Do

If you witness a suspicious detention:
Support legal reforms like California’s AB 937 and urge your representatives to protect immigrant communities.

Final Thought

These are not random incidents—they’re part of a systematic shift toward outsourcing enforcement and operating in the shadows. But if more individuals, lawyers, and communities speak up, push for transparency, and insist on constitutional integrity, we can shine a light on these practices—and curb them for good.
One of the first things I learned in high school computer class—back when we sat behind Atari 800s and green-screen Commodore PETs—was the simple but enduring truth of “GIGO”: Garbage In, Garbage Out. Feed a computer bad code or unreliable data, and you’ll get nonsense—or worse—back. That concept has stuck with me throughout my career in technology consulting, and I’m reminded of it now more than ever, watching the rise of generative AI [1].
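To make GIGO concrete, here’s a toy sketch of my own (not from that old classroom): the averaging code below is perfectly correct, yet it returns nonsense the moment sentinel values sneak into its input.

```python
# Toy illustration of GIGO: the code is correct, the input is not.
# The -999 entries are a common "missing reading" sentinel in old datasets.

def average(readings):
    """Plain arithmetic mean -- does exactly what it's told."""
    return sum(readings) / len(readings)

temperatures = [21.4, 22.1, -999, 20.8, -999, 21.9]  # two missing readings

print(average(temperatures))  # -318.63...: garbage out
# Filter the garbage first, and the same code gives a sane answer:
clean = [t for t in temperatures if t != -999]
print(average(clean))         # 21.55
```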
As the father of three college-age kids, I’ve also had a front-row seat to the rise of AI-generated content in education. In the past few years, I’ve seen my kids’ papers flagged for “AI plagiarism” by tools like Turnitin, even when they hadn’t used AI at all. False positives have gone up, not down. And they’re not alone: Stanford researchers have shown that most AI detection tools produce biased and unreliable results, often penalizing non-native English speakers [2]. But this issue goes far beyond academia. The same kinds of unreliable input-output loops are creeping into business AI systems that we trust to make decisions, generate insights, or interact with customers. It’s worth asking: what happens when AI systems become just as prone to garbage outputs—not because of human error, but because the AI platforms themselves are degrading over time?

The Lifecycle of AI Decay

A growing number of technologists and users have observed a pattern in digital platforms that Cory Doctorow colorfully calls enshittification—a process where systems initially serve users well, then shift to prioritize monetization and control, ultimately degrading the user experience [3]. While not all AI platforms are headed that way, some follow a recognizable pattern:
We’re already seeing proactive efforts from the major players. Salesforce has introduced its Trust Layer to control what goes into and out of LLMs in enterprise settings, aiming to preserve security and data lineage [4]. Anthropic is embedding legal and ethical boundaries into its models through Constitutional AI, a framework inspired by democratic safeguards [5]. Apple is reportedly building private, on-device AI systems with an emphasis on user control and data sovereignty. And Google has taken steps to address bias and misinformation in its Gemini and Bard platforms, adding transparency tools and content verification methods [6]. On the flip side, changes to OpenAI’s GPT models have triggered concerns about data quality, transparency, and trustworthiness [7]. But the risk isn’t limited to big names. Smaller AI vendors—especially those packaging commercial LLMs behind minimal safeguards—are already facing scrutiny. Italy’s antitrust regulator has launched an investigation into DeepSeek, a lesser-known AI firm accused of misleading users with hallucinated results and inadequate disclosures [8]. As these examples show, reputational fallout comes fast when systems are rushed to market without enough rigor.
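Stripped of vendor branding, the “trust layer” idea reduces to a simple pattern: filter what goes into the model, audit what comes out, and fail closed when the audit trips. Below is a minimal, generic sketch of that pattern; it is not Salesforce’s actual implementation, and call_llm is a placeholder for whichever model API you actually use.

```python
import re

# Crude PII patterns -- illustrative only; real redaction needs far more care.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Mask obvious PII before a prompt leaves our boundary."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def guarded_call(prompt: str, call_llm) -> str:
    """Wrap any model call with input redaction and an output audit."""
    answer = call_llm(redact(prompt))  # call_llm: stand-in for your model API
    if EMAIL.search(answer) or PHONE.search(answer):
        # The model echoed something PII-shaped: fail closed, log for review.
        raise ValueError("Output failed PII audit; response blocked.")
    return answer

# Demo with a fake model that just parrots its (already redacted) input:
print(guarded_call("Summarize the note from jane@example.com", lambda p: p))
```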
As Gartner notes, organizations that fail to validate AI outputs and establish clear accountability frameworks will see risk exposure increase sharply by 2026 [9].

What We Can Do About It

This isn’t a call to panic. It’s a call to remember what we already know:
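One of those basics, sketched in code: never let a generated number act on the business until it reconciles with the system of record. All names here are hypothetical; get_invoice_total stands in for whatever your real lookup is.

```python
def approve_refund(model_amount: float, order_id: str, get_invoice_total) -> float:
    """Accept an AI-suggested refund only if it reconciles with the invoice."""
    actual = get_invoice_total(order_id)  # system of record, not the model
    if not (0 < model_amount <= actual):
        # Garbage out detected: route to a human instead of auto-paying.
        raise ValueError(
            f"Suggested refund {model_amount} fails check against invoice {actual}"
        )
    return model_amount

# Example: the model hallucinated a refund larger than the order itself.
lookup = {"A-1001": 129.99}.get
try:
    approve_refund(499.00, "A-1001", lookup)
except ValueError as e:
    print(e)  # Suggested refund 499.0 fails check against invoice 129.99
```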
We should also support efforts to standardize ethical AI practices—whether that’s through internal governance, emerging industry certifications, or broader coalitions like the AI Incident Database [10].

A Final Thought

AI isn’t magic. It’s code plus data plus business pressure. And unless we actively push back on the drift toward degraded, opaque, and lock-in-heavy models, we’ll see the same GIGO cycle play out—only this time with greater stakes. I think it’s worth reflecting on what we, as professionals, owe our organizations and ourselves when it comes to building trustworthy AI. It’s not just about technical excellence. It’s about remembering that clarity, honesty, and context matter—even if the tools we use speak in fluent paragraphs.

Note! I’m not a policy maker or an AI ethicist—but I’ve worked long enough in consulting, systems integration, and data-driven programs to know what can go wrong. My next post will explore what professionals like us can actually do to protect our organizations—and our clients—from the risks we’re just beginning to understand.

Sources Cited

[1] AI – Artificial Intelligence – at Davos 2024: What to Know, World Economic Forum (January 14, 2024). https://www.weforum.org/stories/2024/01/artificial-intelligence-ai-innovation-technology-davos-2024/
[2] Turnitin admits frequent false positives when AI writing is below 20%, K12 Dive (June 7, 2023). https://www.k12dive.com/news/turnitin-false-positives-AI-detector/652221/
[3] What Is AI Enshittification? A New Term for a Growing Problem, VentureBeat. https://venturebeat.com/ai/what-is-ai-enshittification-a-new-term-for-a-growing-problem
[4] Inside the Einstein Trust Layer, Salesforce Developers Blog (October 2023). https://developer.salesforce.com/blogs/2023/10/inside-the-einstein-trust-layer
[5] How Anthropic’s “Constitutional AI” teaches values through model feedback, Anthropic News (December 15, 2022). https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback
[6] Responsible AI: Our 2024 report and ongoing work, Google Blog. https://blog.google/technology/ai/responsible-ai-2024-report-ongoing-work
[7] Is ChatGPT getting worse over time? Study claims yes—but others aren’t sure, Ars Technica (July 2023). https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/
[8] Italy regulator probes DeepSeek over false information risks, Reuters (June 16, 2025). https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/
[9] Gartner predicts 30% of generative AI projects will be abandoned after proof of concept due to poor data quality and lack of trust, AI Business, citing Gartner (August 1, 2024). https://aibusiness.com/automation/gartner-predicts-30-of-generative-ai-initiatives-will-fail-by-2025
[10] AI Incident Database – a shared space to report, track, and learn from real-world AI failures and harms. https://incidentdatabase.ai

When Chaos Becomes the Norm: Why I Wrote a White Paper on Executive Drift and Governance Breakdown
6/10/2025

I didn’t set out to write a white paper. I set out to understand why I felt so damn uneasy. Maybe it started with seeing peaceful protestors met by armored vehicles. Maybe it was the endless chaos in Washington. Or maybe it was just me—an immigrant, veteran, and parent—wondering how much longer our institutions could bend before they break.
What began as frustration turned into research. What began as research turned into structure. What emerged is now something I hope contributes meaningfully to the public record: a documented, reasoned critique of how executive power has drifted from constitutional constraint toward normalized chaos.

📄 Read the full white paper here!

Salesforce’s acquisition of Informatica isn’t just another data platform merger — it’s a big strategic bet on fixing a problem GTM teams have faced for years: bad data kills good AI.
We’ve all seen the promise of Agentforce, Einstein Copilot, and Salesforce Data Cloud. But what happens when those tools rely on siloed, dirty, untrusted data? You don’t get insight — you get noise. Here’s what this acquisition really means:
Together, they offer a complete path from messy data to explainable AI.

Why it matters to GTM, alliances, and vertical strategy teams:
💡 I’ve published a deeper white paper unpacking the GTM implications, integration gaps, industry angles, and risks — all from the perspective of someone who sells, supports, and works closely within the partner ecosystem.

📄 Get it here

Salesforce will define its roadmap in time — but as GTM leaders and partners, we have a chance to shape the conversation early. Let’s get ahead of the curve.
Author

Axel Newe is a strategic partnerships and GTM leader with a background in healthcare, SaaS, and digital transformation. He’s also a Navy veteran, cyclist, and lifelong problem solver. Lately, he’s been writing not just from the field and the road—but from the gut—on democracy, civic engagement, and current events (minus the rage memes). This blog is where clarity meets commentary, one honest post at a time.