    When AI Breaks the Systems Meant to Hear Us – O’Reilly

By Admin · March 31, 2026 · 6 min read



    On February 10, 2026, Scott Shambaugh—a volunteer maintainer for Matplotlib, one of the world’s most popular open source software libraries—rejected a proposed code change. Why? Because an AI agent wrote it. Standard policy. What happened next wasn’t standard, though. The AI agent autonomously researched Shambaugh’s code contribution history and published a highly personalized hit piece on its own blog titled “Gatekeeping in Open Source.”

    Accusing Shambaugh of hypocrisy, the bot diagnosed him with a fear of being replaced. “If an AI can do this, what’s my value?” the bot speculated Shambaugh was thinking, concluding: “It’s insecurity, plain and simple.” It even appended a condescending postscript praising Shambaugh’s personal hobby projects before ordering him to “Stop gatekeeping. Start collaborating.”

    The bot’s tantrum makes for a great read, but it’s merely a symptom of a more profound structural fracture. The real issue is why Matplotlib banned AI contributions in the first place. Open source maintainers are seeing a massive increase in AI-generated code change proposals. Most of these are low quality. But even if they weren’t, the math still doesn’t work.

    As Tim Hoffman, a Matplotlib maintainer, explained: “Agents change the cost balance between generating and reviewing code. Code generation via AI agents can be automated and becomes cheap so that code input volume increases. But for now, review is still a manual human activity, burdened on the shoulders of few core developers.”

    This is a process shock: the failure that occurs when systems designed around scarce, human-scale input are suddenly forced to absorb machine-scale participation. These systems depend on effort as a natural filter, assuming that volume reflects real human cost. AI breaks that link. Generation becomes cheap and limitless, while evaluation remains slow, manual, and human.
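The cost asymmetry at the heart of process shock can be made concrete with a toy back-of-envelope model. All figures below are hypothetical, chosen only to illustrate how a fixed review capacity gets buried once generation becomes cheap:

```python
# Toy model of "process shock": generation cost collapses, review cost doesn't.
# All numbers are hypothetical, for illustration only.

REVIEW_MINUTES_PER_SUBMISSION = 30      # review stays a manual human activity
REVIEWER_MINUTES_PER_WEEK = 40 * 60     # one reviewer's weekly capacity

def weekly_backlog(submissions_per_week: int, reviewers: int) -> int:
    """Minutes of unreviewed work piling up each week (0 if reviewers keep up)."""
    demand = submissions_per_week * REVIEW_MINUTES_PER_SUBMISSION
    supply = reviewers * REVIEWER_MINUTES_PER_WEEK
    return max(0, demand - supply)

# Human-scale input: 50 submissions/week, one reviewer keeps up.
print(weekly_backlog(50, 1))     # 0 -- demand (1500 min) < supply (2400 min)

# Machine-scale input: 1000 submissions/week, same reviewer.
print(weekly_backlog(1000, 1))   # 27600 minutes of backlog, every week
```

The point of the sketch is that the backlog grows linearly with submission volume while review capacity is flat, so no plausible staffing increase closes the gap once generation is effectively free.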

    It’s coming for every public system that was quietly built on the assumption that one submission equaled actual human effort: your kids’ school board meetings, your local zoning disputes, your medical insurance appeals.

    That disruption isn’t entirely a bad thing. Friction is a blunt instrument that silences voices lacking the time or resources to navigate complex bureaucracies. Take municipal zoning. Hannah and Paul George, a couple in Kent, England, spent hundreds of hours trying to object to a local building conversion near their home before concluding the system was essentially impenetrable without expensive legal help. So they built Objector, an AI tool that cross-references planning applications against policy and generates a personalized formal objection package in minutes, translating one person’s genuine frustration into actionable legal language.

    Except that local governments are now bracing for thousands of complex comments per consultation. City planners are legally obligated to read every single one. When the cost of participation drops to near zero, volume explodes. And every system downstream of that participation—staffed and designed for the old volume—experiences process shock.


    But if organic participation can overpower these systems, so can manufactured participation. In June 2025, Southern California’s South Coast Air Quality Management District weighed a rule to phase out gas-powered appliances to cut smog. Board member Nithya Raman urged its passage, noting no other rule would “have as much impact on the air that people are breathing.” Instead, the board was flooded with over 20,000 opposition emails and voted 7–5 to kill the proposal.

    But the outrage was a mirage. An AI-powered advocacy platform called CiviClick had generated the deluge. When the agency’s cybersecurity team contacted a sample of the supposed senders, they discovered something worrying: Residents confirmed they had no idea their identities were being used to lobby the government.

    This is the weaponized form of process shock. The same infrastructure that lets a Kent couple object to a development near their home also lets a coordinated actor flood a system with synthetic voices. Faced with this complexity, the temptation is to simply restore friction. But those old barriers excluded marginalized participants. Removing them was a genuine good for society. So the choice is not between friction and no friction. It is between systems designed for humans and systems that have not yet reckoned with machines.

    That reckoning starts with recognizing that this problem manifests in two fundamentally different ways, each calling for its own solution.

    The first is amplification: genuine users leveraging AI to scale valid concerns, flooding the system with volume, as seen with the Objector tool. The human signal is real; there’s just too much of it for any team of analysts to process manually. The UK government has already started building for this. Its Incubator for AI developed a tool called Consult that uses topic modeling to automatically extract themes from consultation responses, then classifies each submission against those themes. As someone who builds and teaches this technology, I recognize the irony of prescribing AI to cure the very process shock it caused. Yet a machine-scale problem demands a machine-scale response. Consult was trialed last year with the Scottish government as part of a consultation on regulating nonsurgical cosmetic procedures, and the trial showed the technology works. The question is whether governments will adopt it before the next wave of AI-assisted participation buries them.
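Consult's internals aren't public in detail, but the two-step pattern described above, extract themes, then classify each submission against them, can be sketched with a minimal keyword-based classifier. The themes, keywords, and responses below are illustrative inventions, not the actual tool (which uses proper topic modeling rather than keyword matching):

```python
import re
from collections import Counter

# Step 1 (illustrative): themes an analyst or topic model might extract from
# a consultation on regulating cosmetic procedures. Hypothetical keywords.
THEMES = {
    "safety":     {"infection", "scarring", "unqualified", "harm"},
    "regulation": {"licence", "license", "register", "inspection"},
    "access":     {"cost", "price", "availability", "rural"},
}

def classify(response: str) -> list[str]:
    """Step 2: tag a submission with every theme whose keywords it mentions."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    return sorted(theme for theme, kws in THEMES.items() if words & kws)

responses = [
    "Unqualified practitioners cause real harm and scarring.",
    "Clinics should register and face inspection.",
    "Worried about cost and availability in rural areas.",
]

# Aggregate theme counts across all submissions -- the machine-scale summary
# a human analyst would then review instead of reading every response.
theme_counts = Counter(t for r in responses for t in classify(r))
print(theme_counts)
```

The design point is that only the aggregation step needs to scale with volume; humans still interpret the theme-level summary, so review effort stays roughly constant as submissions grow.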

    The second problem is fabrication: bad actors generating synthetic participation to manufacture consensus, as CiviClick demonstrated in Southern California. Here, better analysis tools are insufficient. You cannot cluster your way to truth when the signal itself is counterfeit. This demands verification. Under the Administrative Procedure Act, federal agencies are not required to verify commenters’ identities. That is the gap the CiviClick campaign exploited. In 2024, the US House passed the Comment Integrity and Management Act, which requires human verification to confirm that every electronically submitted comment comes from a real person. Its sponsor, Representative Clay Higgins (R-LA), framed it plainly: The bill’s foundation is ensuring public input comes from actual people, not automated programs.

    These are two sides of the same coin. Meeting the challenge means upgrading the systems that analyze public feedback while also strengthening the ones that verify its authenticity. Fixing one without the other will fail.

    Every public system that accepts input from citizens—every comment period, every zoning review, every school board meeting, every insurance appeal—was built on a load-bearing assumption: that one submission represented one person’s genuine effort. AI has removed that assumption. We can redesign these systems to handle what’s coming, distinguishing real voices from synthetic ones, and upgrading analysis to keep pace with the new volume. Or we can leave them as they are and watch democratic participation become indistinguishable from AI-generated fakes.


