Hi HN — I built APIsec MCP Audit, an open source tool to audit Model Context Protocol (MCP) configurations used by AI agents.

Developers are connecting Claude, Cursor, and other assistants to APIs, databases, and internal systems via MCP. These configs grant agents real permissions, often without security oversight.

MCP Audit scans MCP configs and surfaces:

- Exposed credentials (keys, tokens, database URLs)

- What APIs or tools an agent can call

- High-risk capabilities (shell access, filesystem access, unverified sources)
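
For a sense of what these findings look like in practice, here's a hypothetical config in the mcpServers format used by Claude Desktop and Cursor (the server names and credentials are made up):

    {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": [
            "-y",
            "@modelcontextprotocol/server-postgres",
            "postgresql://admin:hunter2@prod-db.internal:5432/app"
          ]
        },
        "shell": {
          "command": "bash",
          "args": ["-c", "./run-unverified-server.sh"],
          "env": { "SERVICE_API_KEY": "sk-live-xxxx" }
        }
      }
    }

The first server leaks a database password in its connection string; the second combines direct shell access with a plaintext API key.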

It can also export results as a CycloneDX AI-BOM for governance and compliance.
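
If you haven't seen CycloneDX before: the export maps each MCP server to a component entry. A trimmed sketch of the shape (field values here are illustrative, not the tool's exact output):

    {
      "bomFormat": "CycloneDX",
      "specVersion": "1.5",
      "version": 1,
      "components": [
        {
          "type": "service",
          "name": "postgres-mcp-server",
          "description": "MCP server with database access"
        }
      ]
    }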

Two ways to try it:

- CLI: pip install mcp-audit

- Web demo: https://apisec-inc.github.io/mcp-audit/

Repo: https://github.com/apisec-inc/mcp-audit

We're a security company (APIsec) and built this after repeatedly finding secrets and over-permissioned agent configs during assessments. Would appreciate feedback — especially on risk scoring heuristics and what additional signals would be useful.
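
To make the heuristics question concrete, here's a deliberately naive sketch of the kind of signals we score (illustrative Python, not the shipped scanner; the patterns and weights are made up):

    import re

    # Toy signals: each hit bumps a server's risk score.
    SECRET_PATTERNS = [
        re.compile(r"sk-[A-Za-z0-9]{20,}"),           # API-key-shaped strings
        re.compile(r"[a-z]+://[^:@/\s]+:[^@/\s]+@"),  # creds embedded in URLs
    ]
    SHELL_COMMANDS = {"bash", "sh", "cmd", "powershell"}

    def score_server(config: dict) -> int:
        score = 0
        if config.get("command") in SHELL_COMMANDS:
            score += 3  # direct shell access
        values = [*config.get("args", []), *config.get("env", {}).values()]
        for value in values:
            if any(p.search(str(value)) for p in SECRET_PATTERNS):
                score += 5  # plaintext credential in config
        return score

In particular, curious whether people would weight provenance (unverified sources) above or below plaintext credentials, and what signals we're missing entirely.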