Shadow AI: How to deal with unauthorized models and uncontrolled agents

By Admin | March 28, 2026


Shadow AI is considered the next iteration of Shadow IT, with one big difference: when developers use a self-contained, unauthorized tool in their work, the tool itself does not create much risk. Shadow AI does.

    Shadow AI is particularly troublesome because an unauthorized model can gain access to databases it shouldn’t have and lack the system and organizational context to make correct decisions. Further, Shadow AI almost always involves someone in the organization taking company intellectual property and pasting it into a public tool, leaving the destination and subsequent processing unknown.
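That paste-into-a-public-tool risk can be reduced at the boundary. Below is a minimal sketch of scrubbing a prompt before it leaves the company; the regex ruleset is purely illustrative (a real deployment would use an organization-specific DLP policy, not these three patterns):

```python
import re

# Hypothetical patterns for sensitive content; illustrative only, not a
# production DLP ruleset.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact matches and report which rules fired, before the text
    leaves the company boundary."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

cleaned, hits = scrub_prompt(
    "Deploy key sk-abc123def456ghi789 to db1.corp.example.com"
)
```

A gateway that runs this check on every outbound request at least makes the "destination and subsequent processing" auditable, even when the model itself is external.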

Part of the problem, according to Brian Nathanson, head of product management for Clarity at Broadcom, lies in how organizations approach governance and security, precisely because AI is advancing so quickly and continually changing. Engineers feel that governance is a burden on getting their work done, and that their organizations’ governance processes are too slow to bring new models on board. “Individuals are seeing the productivity benefit of AI for more than the enterprise does, at least right now, but enterprises, because of the concerns over liability and their IP protection, have basically tried to clamp down,” Nathanson said. “They’ve said, no you can’t use AI tools, or you can only use these authorized AI tools.”

Nathanson said that puts developers in a bind: if the company only authorizes, say, Gemini, and the developer knows that Claude might give better responses for a certain activity, the developer thinks, “I’ll just copy and paste into my private, personal account of Claude,” and says, “I’m just going to use it, because I can’t wait for the governance process to authorize the AI tools.”

    Ted Way, vice president and chief product officer at SAP, said employees “just want to get stuff done,” and most of the time will ask for forgiveness later. But that’s not worth the risk of sensitive data being leaked, “and not only is it being leaked, but it’s stored and processed outside your company. It might be used to train a model. And then you have your compliance risk,” he said. “And, in the journey to get stuff done, are you actually not even doing it,” because you might not be getting the accurate results you want.

    What organizations can do

    Getting the shadow AI issue under control involves organizational governance, policy and culture.

Some companies, instead of restricting AI, have created orchestration layers that let engineers use many different open source and proprietary models in a way that is controlled by the orchestration. This reduces the need for engineers to go outside the company’s policies to get their work done with the model they choose, and thus reduces the risk that the company’s proprietary data and conversations leak out into the public.
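The core of such an orchestration layer can be sketched in a few lines: engineers pick any registered model, but every call passes through company-controlled routing, logging, and an allow-list. The model names and the provider stub below are invented for illustration, not a real vendor API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Hypothetical allow-list maintained by the governance team.
APPROVED_MODELS = {"gemini-pro", "claude-sonnet", "llama-3-70b"}

def route_request(model: str, prompt: str, user: str) -> str:
    """Single controlled path to any approved model."""
    if model not in APPROVED_MODELS:
        raise PermissionError(f"{model} is not on the approved list")
    # Central audit trail: who used which model, and how much text left.
    log.info("user=%s model=%s prompt_chars=%d", user, model, len(prompt))
    return send_to_provider(model, prompt)

def send_to_provider(model: str, prompt: str) -> str:
    # Stub: in practice this would call the provider SDK with enterprise
    # credentials and data-retention guarantees, not a personal account.
    return f"[{model}] response"
```

Because the gateway holds the credentials, a developer who wants Claude instead of Gemini can simply change the `model` argument rather than pasting IP into a personal account.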

From a policy perspective, Way said it starts with a clear view of generative AI policy. He explained that modern technology forces a trade-off in which organizations can achieve only two of three desired outcomes: safe, capable, and autonomous.

    • Safe and Capable: This state requires extensive “human babysitting” and is too slow, as every request is “gated on humans.”
    • Capable and Autonomous: This is the opposite extreme, a lack of oversight in which the LLM decides what is safe. Way cites an example of an LLM deciding to decrypt repository answers to achieve a better score on an evaluation.
    • Safe and Autonomous: This state is too restricted, meaning the system will not have access to the tools it needs to be capable.
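Way’s “gated on humans” corner can be made concrete with a toy gate: low-risk tools run autonomously, and everything else waits for a person, which is exactly what makes this mode slow. The tool names and approval prompt here are invented for illustration:

```python
# Low-risk tools that may run without review (illustrative names).
AUTO_APPROVED = {"read_docs", "run_linter"}

def execute_tool(tool: str, args: dict, approver=input) -> bool:
    """Return True if the agent's tool call may proceed.

    Anything outside the auto-approved set blocks on a human answer,
    trading autonomy for safety.
    """
    if tool in AUTO_APPROVED:
        return True
    answer = approver(f"Agent wants to run {tool}({args}). Allow? [y/N] ")
    return answer.strip().lower() == "y"
```

Dropping the human gate moves the system to the capable-and-autonomous corner; shrinking the tool set toward empty moves it to safe-and-autonomous.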

     Addressing Shadow AI requires moving past ineffective governance models. Michael Burch, director of application security at Security Journey, suggests that while an AI team or governance committee should exist, governance is not just a “10-page policy report that nobody’s gonna read.” Instead, it must be about “everyday-to-day practical governance—taking that 10-page report and making it actionable for individuals.” 

    Governance, he said, “isn’t just about the policy publications and writing all the rules and buying the right tools. It’s, is all the work we put in, is it actionable? Did it actually have an impact? And did we give it to people in a way that let them actually do it day-to-day and improve the way they’re thinking and treating security?”  Any governance effort must be “grounded in real truth of day-to-day workflows,” he said, to ensure people will actually adopt it. The ultimate goal is a practical system that drives adoption and gets people to hold themselves accountable for how they use AI. Burch noted that governance fails when policies alone are relied upon to create good decisions. 

    A vital step in this practical approach is building a security culture. This involves teams having a shared vocabulary, workflow guidance, and examples. If everyone understands how AI integrates into their workflows and speaks the same language, the potential for failure is significantly reduced. 

    “If we’re all talking the same language, if we all understand how AI integrates in our different workflows, and we have examples to work from so we understand how to… the lift to get there is a lot smaller for us, we have a lot less chance for failure, because everybody’s kind of on that same page,” Burch explained.

     


