How the VMware Tanzu Greenplum MCP Server puts security patterns into practice (Part 2: The enterprise implementation)
In Part 1 of this blog series, I laid out why database access is the hardest MCP problem and explored the security patterns every implementation needs to get right. I used gp-mcp-server, an open source prototype I built for Greenplum and PostgreSQL, to make those patterns concrete. (If you haven’t read it yet, start there.)
This article picks up where that one left off. The prototype proved the patterns. Now I want to show you what those patterns look like at enterprise scale.
The VMware Tanzu Greenplum MCP Server is a production-grade implementation designed for the Greenplum ecosystem. It takes the same security principles I explored in gp-mcp-server and delivers them with enterprise authentication, policy-driven access controls, and a purpose-built tool surface that fundamentally changes how AI agents interact with databases.
It’s the same threat model with the same non-negotiables, but in a completely different weight class.
The threats haven’t changed
Before we look at what the Greenplum MCP Server does differently, let’s revisit the attacks you’re facing. The threat model doesn’t care whether your implementation is a prototype or a product. These are the scenarios every database MCP server needs to survive.
The transaction wrapper bypass
Anthropic’s original PostgreSQL MCP server wrapped queries in a read-only transaction and rolled them back. Datadog Security Labs found that an attacker could send semicolon-delimited multi-statement queries like COMMIT; DROP SCHEMA public CASCADE;. The COMMIT exits the wrapper, and the DROP executes unprotected. (Part 1 covers this in detail.)
The Greenplum MCP Server enforces read-only mode by default and uses policy-based controls to restrict which SQL statement types are permitted per role. The defense is layered: the policy layer rejects the multi-statement query before the connection pool ever sees it, and even if a statement slipped through, the DROP would fail against the read-only database user at the connection level.
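To make the pattern concrete, here is a minimal sketch of statement-level validation in Python. This is my own illustration, not the Greenplum MCP Server's actual code, and the allowlist is hypothetical; a production validator would use a real SQL parser rather than string checks, since semicolons can legally appear inside string literals.

```python
import re

# Hypothetical allowlist for a read-only role. A real policy engine would
# drive this from configuration (e.g., a policy.yaml) per role.
ALLOWED_STATEMENTS = {"SELECT", "EXPLAIN", "SHOW"}

def validate_query(sql: str) -> None:
    """Reject multi-statement queries and non-allowlisted statement types."""
    # Strip one trailing semicolon, then refuse anything that still contains
    # one: "COMMIT; DROP SCHEMA public CASCADE;" is two statements, not one.
    body = sql.strip().rstrip(";")
    if ";" in body:
        raise ValueError("multi-statement queries are not permitted")
    first_word = re.match(r"\s*(\w+)", body)
    if not first_word or first_word.group(1).upper() not in ALLOWED_STATEMENTS:
        raise ValueError("statement type not permitted for this role")

validate_query("SELECT * FROM orders LIMIT 10")        # passes silently
try:
    validate_query("COMMIT; DROP SCHEMA public CASCADE;")
except ValueError as exc:
    print(exc)  # multi-statement queries are not permitted
```

The point of the sketch is the layering: the wrapper bypass never reaches the database because the validator rejects the shape of the query, independent of what the database user is allowed to do.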
The prompt injection via untrusted content
In Part 1, I described what I call the Snap Conditions: three stones that have to be in the gauntlet for a prompt injection attack to succeed. The agent can reach private data (Access Stone). Untrusted content enters the agent’s context (Exposure Stone). The agent has a path to send data out (Exfiltration Stone). Remove any one of them and the snap fails.
The Greenplum MCP Server removes the stones systematically:
- Policy controls – The server’s policy.yaml defines allowed and denied SQL statement types per role, restricting what an agent can reach. This is the Access Stone, neutralized through declarative configuration rather than application code.
- PII redaction – The server’s architecture supports role-based column-level masking, so sensitive data can be redacted before it ever enters the LLM’s context. That’s the Exposure Stone.
- Credential scoping – OAuth 2.1 integration maps IDP roles directly to database users. A read-only role gets readonly_user credentials. An analyst gets analyst_user. The agent’s blast radius is constrained by identity, not by trust. That’s the Exfiltration Stone.
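Tying the three stones together, a role-scoped policy file might look something like the following. This is an illustrative sketch only; the field names and structure here are mine, not the Greenplum MCP Server's actual policy.yaml schema.

```yaml
# Illustrative only -- not the product's actual policy schema.
roles:
  readonly:
    db_user: readonly_user          # credential scoping: identity maps to a DB user
    allow_statements: [SELECT, EXPLAIN, SHOW]
    deny_statements: [INSERT, UPDATE, DELETE, DROP, COMMIT]
    masked_columns:                 # PII redaction before results reach the LLM
      customers: [email, ssn]
  analyst:
    db_user: analyst_user
    allow_statements: [SELECT, EXPLAIN, SHOW]
    masked_columns:
      customers: [ssn]
```

The value of declarative policy is that the security boundary lives in reviewable configuration, owned by the database team, rather than scattered through application code.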
Hundreds of servers, zero authentication
Security scans in 2025 found hundreds of public MCP servers exposed to the internet with no authentication or encryption. These weren’t edge cases or hobby projects. They were production-grade servers with direct database access and no audit trail.
Authentication is a first-class feature of the Greenplum MCP Server, not an afterthought. Three auth modes: no-auth (development only), Basic Auth, and OAuth 2.1 with enterprise IDP integration via Keycloak. TLS and mTLS are fully supported. There is no configuration path that accidentally exposes a production database without credentials.
From patterns to product
My first blog post in this series established the non-negotiables: read-only connections, SQL validation, result constraints, PII redaction, credential isolation, and mandatory authentication. Those are the controls every database MCP server needs. Here’s how the Greenplum MCP Server delivers each one as a product feature rather than a custom build:
- Read-only connections – Enforced by default at the database level. The server ships read-only. You have to explicitly opt into write access, and the policy layer governs what write operations are permitted.
- SQL validation – Policy-based filtering defines which SQL statement types are allowed per role. The policy.yaml configuration gives database teams declarative control over what the agent can and cannot do.
- Result constraints – Built-in row limits, byte caps, and timeouts. Token-aware truncation ensures the LLM gets useful context without flooding the context window.
- PII redaction – Role-based column-level masking architecture. Sensitive data is handled before it reaches the LLM’s context.
- Credential isolation – OAuth 2.1 with PKCE and enterprise IDP integration via Keycloak. Database credentials never appear in an MCP payload. Identity flows through the auth layer, not through the transport.
- Authentication – Mandatory in production. Three modes cover the full spectrum from development to enterprise deployment.
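The result-constraint item above is worth a sketch. The function below is a hypothetical illustration of row and byte caps, not the server's implementation; the real server also performs token-aware truncation against the model's context window, which I approximate here with a simple byte budget.

```python
def constrain_result(rows, max_rows=100, max_bytes=32_000):
    """Apply row and byte caps before query results reach the LLM's context.

    Hypothetical sketch. Returns the kept rows (as strings) plus a flag the
    caller can surface to the agent so it knows the result was truncated.
    """
    kept, size, truncated = [], 0, False
    for i, row in enumerate(rows):
        if i >= max_rows:
            truncated = True
            break
        line = repr(row)
        if size + len(line) > max_bytes:
            truncated = True
            break
        kept.append(line)
        size += len(line)
    return kept, truncated

# A 1,000-row result gets capped at 50 rows, and the agent is told so.
rows, truncated = constrain_result(
    [(n, "x" * 100) for n in range(1000)], max_rows=50
)
print(len(rows), truncated)  # 50 True
```

Signaling truncation explicitly matters: an agent that knows the result was cut can narrow its query, instead of reasoning over silently incomplete data.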
Tool design at scale
In Part 1, I argued that tool design is a security decision. The temptation is to expose a single execute_query tool and let the LLM figure it out. That’s a security failure disguised as simplicity. In gp-mcp-server, I applied this principle with 12 focused tools.
The Greenplum MCP Server takes the same principle to a different scale, with more than 30 purpose-built tools organized around a deliberate workflow: Discover the Schema. Constrain the Query. Contain the Data. Instrument the System.
Discovery tools let the agent understand the schema before it writes SQL. Diagnostic tools return pre-computed, structured results instead of raw query output. The GPExtensions team reports that this cuts token usage by 90%–98%. That’s more than a cost win, because fewer tokens in the context window means less surface area for hallucination and less room for injected content to hide.
That decomposition is itself a security pattern. An agent with separate discovery, query, and diagnostic tools will naturally explore the schema before attempting ad hoc SQL. An agent with built-in analytics tools has less reason to craft complex queries that push against policy boundaries. Tool design is prompt engineering by another name, and most teams aren’t pulling that lever yet.
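The difference between an open-ended query tool and a purpose-built one is easy to see in code. The registry below is a minimal sketch of the narrow-tool pattern under my own assumptions; the tool names, templates, and dispatch logic are illustrative, not the product's API.

```python
# Hypothetical sketch: each tool carries a fixed SQL template decided at
# registration time, so the agent supplies parameter values, never SQL.
TOOLS = {
    "list_tables": {
        "sql": "SELECT tablename FROM pg_tables WHERE schemaname = %(schema)s",
        "params": ["schema"],
    },
    "table_row_count": {
        "sql": "SELECT count(*) FROM {schema}.{table}",
        "params": ["schema", "table"],
    },
}

def call_tool(name, **kwargs):
    """Resolve a tool call to its fixed template plus validated parameters."""
    tool = TOOLS[name]
    missing = set(tool["params"]) - set(kwargs)
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    # The query shape is immutable here; a real implementation would also
    # validate identifier parameters and bind values server-side.
    return tool["sql"], kwargs
```

Contrast that with a single `execute_query(sql)` tool, where every call is a chance for the model to compose something the policy layer has to catch.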
The most underappreciated security feature
The Greenplum MCP Server supports user-defined tools via configuration. Custom SQL-backed tools, defined at runtime, no code changes required.
Think about what that means. A database team can define precisely scoped tools: specific queries against specific tables with specific parameters. They hand those to AI agent users instead of open-ended query access. The agent gets the capability it needs, and the database team retains full control over what SQL actually executes.
That’s the intersection of flexibility and governance that most MCP implementations are missing entirely. It shifts the security conversation from “What can we prevent?” to “What do we explicitly allow?” That’s a fundamentally stronger posture.
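A user-defined tool might be declared along these lines. The structure below is a hedged sketch to show the idea; the field names are hypothetical, not the server's actual configuration format.

```yaml
# Illustrative only -- hypothetical config for a user-defined, SQL-backed tool.
custom_tools:
  - name: top_customers_by_region
    description: "Top N customers by revenue for a given region"
    sql: >
      SELECT customer_id, total_revenue
      FROM sales.customer_summary
      WHERE region = :region
      ORDER BY total_revenue DESC
      LIMIT :limit
    parameters:
      region: {type: string, required: true}
      limit:  {type: integer, default: 10, max: 100}
    roles: [analyst]
```

The agent sees a tool named `top_customers_by_region` with two typed parameters. It never sees, and can never alter, the SQL behind it.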
The assembled team
In Part 1, I said each security control alone is like a solo Avenger: capable, but vulnerable. Stack them together and you get the assembled team. The threat has to beat all of them simultaneously.
The Greenplum MCP Server is that assembled team. Read-only defaults, policy-based access controls, enterprise authentication, purpose-built tools that constrain agent behavior from the ground up, and user-defined tools that give database teams governance without sacrificing flexibility. Every control from Part 1’s non-negotiable list is a product feature, not a custom integration.
The broader MCP database space is still new and ranges from “interesting experiment” to “actively dangerous.” But the patterns exist, the spec is ready, and the implementations are open source and testable. The question isn’t whether AI agents will connect to your databases. It’s whether the security boundary will be in place when they do.
Discover the Schema. Constrain the Query. Contain the Data. Instrument the System.
That’s the blueprint. And now it’s a product.