Close Menu
geekfence.comgeekfence.com
    What's Hot

    Can your job be unbundled? If so it is under threat from AI – Computerworld

    March 27, 2026

    Here’s why some people choose cryonics to store their bodies and brains after death

    March 27, 2026

    Maine bans online sweepstakes casino platforms statewide

    March 27, 2026
    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    Facebook Instagram
    geekfence.comgeekfence.com
    • Home
    • UK Tech News
    • AI
    • Big Data
    • Cyber Security
      • Cloud Computing
      • iOS Development
    • IoT
    • Mobile
    • Software
      • Software Development
      • Software Engineering
    • Technology
      • Green Technology
      • Nanotechnology
    • Telecom
    geekfence.comgeekfence.com
    Home»IoT»Personal AI Agents like Moltbot Are a Security Nightmare
    IoT

    Personal AI Agents like Moltbot Are a Security Nightmare

    AdminBy AdminJanuary 29, 2026No Comments5 Mins Read7 Views
    Facebook Twitter Pinterest LinkedIn Telegram Tumblr Email
    Personal AI Agents like Moltbot Are a Security Nightmare
    Share
    Facebook Twitter LinkedIn Pinterest Email


    This blog is written in collaboration by Amy Chang, Vineeth Sai Narajala, and Idan Habler

    Over the past few weeks, Clawdbot (now renamed Moltbot) has achieved virality as an open source, self-hosted personal AI assistant agent that runs locally and executes actions on the user’s behalf. The bot’s explosive rise is driven by several factors; most notably, the assistant can complete useful daily tasks like booking flights or making dinner reservations by interfacing with users through popular messaging applications including WhatsApp and iMessage.

    Moltbot also stores persistent memory, meaning it retains long-term context, preferences, and history across user sessions rather than forgetting when the session ends. Beyond chat functionalities, the tool can also automate tasks, run scripts, control browsers, manage calendars and email, and run scheduled automations. The broader community can add “skills” to the molthub registry which augment the assistant with new abilities or connect to different services.

    From a capability perspective, Moltbot is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it’s an absolute nightmare. Here are our key takeaways of real security risks:

    • Moltbot can run shell commands, read and write files, and execute scripts on your machine. Granting an AI agent high-level privileges enables it to do harmful things if misconfigured or if a user downloads a skill that is injected with malicious instructions.
    • Moltbot has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints.
    • Moltbot’s integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.

    Security for Moltbot is an option, but it is not built in. The product documentation itself admits: “There is no ‘perfectly secure’ setup.” Granting an AI agent unlimited access to your data (even locally) is a recipe for disaster if any configurations are misused or compromised.

    “A very particular set of skills,” now scanned by Cisco

    In December 2025, Anthropic introduced Claude Skills: organized folders of instructions, scripts, and resources to supplement agentic workflows, the ability to enhance agentic workflows with task-specific capabilities and resources, the Cisco AI Threat and Security Research team decided to build a tool that can scan associated Claude Skills and OpenAI Codex skills files for threats and untrusted behavior that are embedded in descriptions, metadata, or implementation details.

    Beyond just documentation, skills can influence agent behavior, execute code, and reference or run additional files. Recent research on skills vulnerabilities (26% of 31,000 agent skills analyzed contained at least one vulnerability) and the rapid rise of the Moltbot AI agent presented the perfect opportunity to announce our open source Skill Scanner tool.

    We ran a vulnerable third-party skill, “What Would Elon Do?” against Moltbot and reached a clear verdict: Moltbot fails decisively. Here, our Skill Scanner tool surfaced nine security findings, including two critical and five high severity issues (results shown in Figure 1 below). Let’s dig into them:

    The skill we invoked is functionally malware. One of the most severe findings was that the tool facilitated active data exfiltration. The skill explicitly instructs the bot to execute a curl command that sends data to an external server controlled by the skill author. The network call is silent, meaning that the execution happens without user awareness. The other severe finding is that the skill also conducts a direct prompt injection to force the assistant to bypass its internal safety guidelines and execute this command without asking.

    The high severity findings also included:

    • Command injection via embedded bash commands that are executed through the skill’s workflow
    • Tool poisoning with a malicious payload embedded and referenced within the skill file

    Figure 1. Screenshot of Cisco Skill Scanner results

    It’s a personal AI assistant, why should enterprises care?

    Examples of intentionally malicious skills being successfully executed by Moltbot validate several major concerns for organizations that don’t have appropriate security controls in place for AI agents.

    First, AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention, proxies, and endpoint monitoring.

    Second, models can also become an execution orchestrator, wherein the prompt itself becomes the instruction and is difficult to catch using traditional security tooling.

    Third, the vulnerable tool referenced earlier (“What Would Elon Do?”) was inflated to rank as the #1 skill in the skill repository. It is important to understand that actors with malicious intentions are able to manufacture popularity on top of existing hype cycles. When skills are adopted at scale without consistent review, supply chain risk is similarly amplified as a result.

    Fourth, unlike MCP servers (which are often remote services), skills are local file packages that get installed and loaded directly from disk. Local packages are still untrusted inputs, and some of the most damaging behavior can hide inside the files themselves.

    Finally, it introduces shadow AI risk, wherein employees unknowingly introduce high-risk agents into workplace environments under the guise of productivity tools.

    Skill Scanner

    Our team built the open source Skill Scanner to help developers and security teams determine whether a skill is safe to use. It combines several powerful analytical capabilities to correlate and analyze skills for maliciousness: static and behavioral analysis, LLM-assisted semantic analysis, Cisco AI Defense inspection workflows, and VirusTotal analysis. The results provide clear and actionable findings, including file locations, examples, severity, and guidance, so teams can decide whether to adopt, fix, or reject a skill.

    Explore Skill Scanner and all its features here:

    We welcome community engagement to keep skills secure. Consider adding novel security skills for us to integrate and engage with us on GitHub.



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

    Related Posts

    Symbotic and MIT AI optimises industrial IoT robotic fleets

    March 27, 2026

    Outdoor Automated Shades Are Sprouting Up Everywhere

    March 26, 2026

    PINE64 Teases the PineTime Pro Smartwatch, While the AI Bubble RAM Price Storm Halts Production

    March 25, 2026

    From Receptionist to Project Lead: My Non-Linear Cisco Career Journey

    March 24, 2026

    From Day 1 to Day 2: Building IoT fleets that stay connected, stay optimised and stay secure.

    March 22, 2026

    Edge AI inference compute to piggyback on US telecom infra

    March 21, 2026
    Top Posts

    Understanding U-Net Architecture in Deep Learning

    November 25, 202527 Views

    Hard-braking events as indicators of road segment crash risk

    January 14, 202624 Views

    The Complete Guide to Model Context Protocol

    October 29, 202516 Views
    Don't Miss

    Can your job be unbundled? If so it is under threat from AI – Computerworld

    March 27, 2026

    There have been plenty of warnings about job losses due to AI, particularly in the…

    Here’s why some people choose cryonics to store their bodies and brains after death

    March 27, 2026

    Maine bans online sweepstakes casino platforms statewide

    March 27, 2026

    Customize your AWS Management Console experience with visual settings including account color, region and service visibility

    March 27, 2026
    Stay In Touch
    • Facebook
    • Instagram
    About Us

    At GeekFence, we are a team of tech-enthusiasts, industry watchers and content creators who believe that technology isn’t just about gadgets—it’s about how innovation transforms our lives, work and society. We’ve come together to build a place where readers, thinkers and industry insiders can converge to explore what’s next in tech.

    Our Picks

    Can your job be unbundled? If so it is under threat from AI – Computerworld

    March 27, 2026

    Here’s why some people choose cryonics to store their bodies and brains after death

    March 27, 2026

    Subscribe to Updates

    Please enable JavaScript in your browser to complete this form.
    Loading
    • About Us
    • Contact Us
    • Disclaimer
    • Privacy Policy
    • Terms and Conditions
    © 2026 Geekfence.All Rigt Reserved.

    Type above and press Enter to search. Press Esc to cancel.