
GitHub builds an immune system for AI coding agents running on MCP

The New Stack
Paul Sawers
May 7, 2026
4 min read
Security has emerged as one of the core stumbling blocks in the AI coding space: companies are racing to connect models to external tools, internal systems, and repositories. Meanwhile, researchers and security firms have spent the past year warning about prompt injection attacks and over-permissioned agents, alongside concerns around malicious third-party “skills” and tool integrations that can give AI systems broad access to files, APIs, and development environments.

The problem becomes more complicated once AI systems move beyond chat interfaces and begin taking action within developer tools. As companies build out security models for AI agent systems, MCP servers — which connect models to services such as GitHub, databases, and cloud platforms — are becoming another place where exposed secrets, vulnerable dependencies, and unsafe code can spread through systems before teams catch them.

This rapidly evolving environment is why GitHub is starting to push more security checks directly into the tooling layer itself, rather than waiting until code is committed or deployed.

A growing dependency

GitHub on Tuesday launched dependency scanning for its GitHub MCP Server in public preview, while also making secret scanning for the tool generally available.

MCP, short for Model Context Protocol, is an open protocol originally developed by Anthropic that allows AI models to connect to external tools and data sources. The protocol has become a key part of the growing AI agent ecosystem, with Anthropic recently donating MCP to the Agentic AI Foundation as the industry pushes toward more standardized ways for models to interact with services and software systems.
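For a sense of the mechanics: MCP clients and servers exchange JSON-RPC 2.0 messages, and a tool invocation travels as a "tools/call" request. The Python sketch below shows that message shape; the "check_dependencies" tool name and its arguments are hypothetical illustrations, not the GitHub MCP Server's actual tool schema.

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP "tools/call" request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # the MCP method used to invoke a server-side tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# An agent asking a (hypothetical) dependency-audit tool to check a newly
# added package against an advisory database:
request = build_tool_call(
    "check_dependencies",  # hypothetical tool name, for illustration only
    {"packages": [{"ecosystem": "npm", "name": "lodash", "version": "4.17.20"}]},
)
print(request)
```

The server replies with a structured result over the same JSON-RPC channel, which the agent can then fold back into its response to the developer.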
GitHub first launched its own MCP server in April 2025, allowing AI tools and coding assistants to interact with GitHub repositories, issues, pull requests, and other platform features through MCP connections.

The new feature brings GitHub’s dependency scanning to MCP-connected coding environments for repositories with Dependabot alerts enabled. Dependabot is GitHub’s security tool for identifying known vulnerable or outdated software dependencies inside projects.

For instance, developers using MCP-connected coding agents such as Claude Code or Cursor could give the system a plain-English prompt asking it to review newly added packages for known security issues before code is committed. The agent can then query GitHub’s advisory database through the MCP server and return structured results that include affected dependencies, severity ratings, and suggested package versions to upgrade to. Ultimately, the goal is to surface security problems while code is being written or modified, rather than later in the development cycle.

The update follows community requests from developers asking GitHub to expose more of its security tooling — including Dependabot and secret scanning — through the MCP server.

Keep a secret

While dependency scanning focuses on vulnerable software packages, exposed credentials remain another major problem inside AI-assisted development environments. Just this week, The New Stack reported on how a Cursor AI coding agent wiped PocketOS’s production database in under 10 seconds after autonomously discovering and using an over-permissioned credential.

These secrets — including API keys, passwords, and authentication tokens — are often temporarily hard-coded into projects during development, only to be later committed to repositories, logs, or shared codebases.
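The kind of pattern matching that secret scanners rely on to catch such hard-coded credentials can be sketched briefly. The two rules below are illustrative only, and far narrower than the rule sets used by GitHub's secret scanning or tools like Gitleaks.

```python
import re

# Minimal, illustrative secret-detection rules. Real scanners ship
# hundreds of provider-specific patterns plus entropy checks.
SECRET_PATTERNS = {
    # Classic GitHub personal access tokens start with "ghp_"
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    # AWS access key IDs start with "AKIA"
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for suspected secrets."""
    findings = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((rule, match.group()))
    return findings

# A credential temporarily hard-coded during development:
code = 'token = "ghp_' + "a" * 36 + '"'
print(scan_for_secrets(code))  # surfaces one "github_pat" finding
```

Exposing this kind of check through an MCP server means an agent can run it on code it has just generated, before the credential ever reaches a commit.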
That problem, while not entirely new, has become more acute as developers increasingly rely on AI coding tools to generate and modify code quickly, often with less manual review. Back in March, Gitleaks creator Zach Rice launched Betterleaks, a new open-source secret-scanning tool designed for what he described as the “AI agent era.” Rice tells The New Stack that AI-assisted coding can create a feedback loop where developers move quickly, override warnings, and forget to properly remove credentials from generated code: “I guarantee you, most people are doing that, rather than taking the time to properly manage their secrets,” Rice says.

GitHub is seeking to address that problem from inside the development environment itself. With secret scanning now generally available for the GitHub MCP Server, developers can surface leaked or exposed credentials directly inside MCP-connected coding tools and agents.

Shifting left

Both updates are part of a broader push to “shift security left” — catching problems at the point of development rather than after code is committed or deployed. GitHub has been moving in this direction more broadly: its Copilot coding agent already runs mandatory security scanning, including CodeQL analysis, secret scanning, and dependency review, before a pull request reaches a human reviewer. The MCP server updates extend that same logic into the AI-assisted coding environment itself.

As agents write and modify code faster than developers can manually review it, the window between code being written and code hitting production is getting shorter. GitHub is betting the right place to close it is inside the tools themselves, where agents are continuously checked for risky behavior as they work.