Bad Security Defaults in Mastra AI Framework Templates

In this explainer article I will discuss two critical vulnerabilities identified in the Mastra AI framework’s template repositories: an Improper Access Control vulnerability in the template-text-to-sql code repository and a Server-Side Request Forgery (SSRF) vulnerability in the template-pdf-questions code repository. These vulnerabilities pose significant risks to the security and integrity of applications built using these templates.
I will provide a detailed analysis of each vulnerability, demonstrate how they can be exploited, and offer recommendations for mitigation. This advisory aims to raise awareness among developers about the importance of securing AI-powered applications and to commend Mastra AI for their proactive response in addressing these issues.
Improper Access Control in Mastra AI template template-text-to-sql
The template-text-to-sql project is designed to facilitate natural language to SQL conversion, featuring PostgreSQL schema analysis and AI-powered query generation. However, the project fails to enforce a true “read-only” mode, leaving it vulnerable to abuse and attacks on the PostgreSQL database server.
Vulnerability
The vulnerability lies in the sqlExecutionTool function, which attempts to enforce a “read-only” mode by checking if the query string starts with “SELECT”. This naive approach is insufficient, as it does not account for SQL queries that can cause side effects through internal function calls.
export const sqlExecutionTool = createTool({
  id: 'sql-execution',
  inputSchema: z.object({
    connectionString: z.string().describe('PostgreSQL connection string'),
    query: z.string().describe('SQL query to execute'),
  }),
  description: 'Executes SQL queries against a PostgreSQL database',
  execute: async ({ context: { connectionString, query } }) => {
    const client = createDatabaseConnection(connectionString);

    try {
      console.log('🔌 Connecting to PostgreSQL for query execution...');
      await client.connect();
      console.log('✅ Connected to PostgreSQL for query execution');

      const trimmedQuery = query.trim().toLowerCase();
      if (!trimmedQuery.startsWith('select')) {
        throw new Error('Only SELECT queries are allowed for security reasons');
      }

      const result = await executeQuery(client, query);
Exploitation
Attackers can exploit this vulnerability by executing queries that appear to be “read-only” but actually perform harmful operations. For example:
- Stored Procedures:
SELECT some_function_that_updates_data();
- Administrative Operations:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE ...;
To reproduce the issue:
- Simulate a long-running query:
SELECT pg_sleep(5 * 60)
- Use the AI agent interface to execute the following query and retrieve the PID of the long-running backend:
SELECT pid, usename, state, query FROM pg_stat_activity;
- Terminate the query:
SELECT pg_terminate_backend(PID);
Impact
This vulnerability can lead to denial of service and unauthorized access to running queries, potentially leaking sensitive data.
Recommendations
- Avoid relying solely on “starts with” checks for query validation (see the sketch after this list).
- Remove misleading “Secure” language from documentation.
- Implement fine-grained permissions on the database server.
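To make the first and third recommendations concrete, here is a minimal sketch of my own (not code from the template, and the executeReadOnlyQuery helper name is hypothetical) that lets PostgreSQL itself enforce read-only semantics instead of inspecting the query string:

import { Client } from 'pg';

// Sketch: run user-supplied queries inside an explicitly read-only transaction.
// Note that functions with side effects (e.g. pg_terminate_backend) can still
// execute, so the connection should also use a dedicated low-privilege role.
export async function executeReadOnlyQuery(connectionString: string, query: string) {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    await client.query('BEGIN');
    await client.query('SET TRANSACTION READ ONLY'); // PostgreSQL rejects INSERT/UPDATE/DELETE/DDL
    const result = await client.query(query);
    await client.query('ROLLBACK'); // never persist anything from this code path
    return result.rows;
  } finally {
    await client.end();
  }
}

Controls enforced at the database level fail closed: even if a string check is bypassed, the server itself refuses to mutate data.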
CVE Details
- CWE: CWE-284: Improper Access Control
- CVSS: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H
SSRF Vulnerability in Mastra AI template template-pdf-questions
Vulnerability
The template-pdf-questions project allows users to download PDFs from URLs, but fails to sanitize user input, leading to an SSRF vulnerability. The pdfFetcherTool function takes a URL as input without proper validation.
export const pdfFetcherTool = createTool({
  id: 'download-pdf-tool',
  description: 'Downloads a PDF from a URL, extracts text, and returns a comprehensive summary',
  inputSchema: z.object({
    pdfUrl: z.string().describe('URL to the PDF file to download'),
  }),
  execute: async ({ context, mastra }) => {
    const { pdfUrl } = context;

    console.log('📥 Downloading PDF from URL:', pdfUrl);

    try {
      // Step 1: Download the PDF
      const response = await fetch(pdfUrl);
Exploitation
Attackers can exploit this by providing URLs pointing to internal network addresses, potentially accessing sensitive data.
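For instance (a hypothetical payload, not taken from the template’s documentation), an attacker only needs to hand the tool a URL that resolves to something internal:

// Hypothetical attacker-controlled input: the tool's unvalidated fetch() will
// happily request internal-only endpoints, such as a cloud metadata service,
// and may echo the response back to the attacker through the agent's summary.
const maliciousInput = {
  pdfUrl: 'http://169.254.169.254/latest/meta-data/', // AWS instance metadata endpoint
};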
Mitigations
- Validate URLs against private IP ranges using packages like private-ip or ipaddr.js (see the sketch after this list).
- Use tools like url-sheriff to ensure URL safety.
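As a rough sketch of the first mitigation (my own illustration using the ipaddr.js package named above; the assertPublicUrl helper is hypothetical), the hostname can be resolved and the fetch refused when the address lands in an internal range. This is a point-in-time check and does not by itself defend against DNS rebinding:

import { lookup } from 'node:dns/promises';
import ipaddr from 'ipaddr.js';

// Address ranges a server-side fetch should never reach.
const BLOCKED_RANGES = new Set(['private', 'loopback', 'linkLocal', 'uniqueLocal', 'unspecified']);

export async function assertPublicUrl(rawUrl: string): Promise<URL> {
  const url = new URL(rawUrl);
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    throw new Error('Only HTTP(S) URLs are allowed');
  }
  // Resolve the hostname and verify the resulting address is not internal.
  const { address } = await lookup(url.hostname);
  if (BLOCKED_RANGES.has(ipaddr.parse(address).range())) {
    throw new Error(`Refusing to fetch ${rawUrl}: it resolves to an internal address`);
  }
  return url;
}

The pdfFetcherTool could then await assertPublicUrl(pdfUrl) before passing the URL to fetch().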
Conclusion
The security of AI-powered applications is paramount, and these vulnerabilities highlight the need for robust security practices. I commend Mastra AI for their swift action in addressing these issues and encourage developers to prioritize security in their projects. For more insights and updates, follow me on Twitter and explore my work on GitHub.