The 7 Deadly Sins of AI-Generated Code
After auditing dozens of vibe-coded applications, I've found that the same vulnerabilities appear over and over. I call them the 7 deadly sins because they show up with almost religious consistency.
Sin 1: Missing Input Validation
This is the most common sin, appearing in nearly every AI-generated codebase I review. The AI builds endpoints that trust user input completely.
Vulnerable (what the AI generates):
// Express route - AI-generated
app.post('/api/users', async (req, res) => {
  const { name, email, role } = req.body;
  const user = await db.user.create({
    data: { name, email, role }
  });
  res.json(user);
});
See the problem? The role field comes straight from the request body. Any user can set themselves as admin. There's no validation on email format, no length limits on name, nothing.
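To see the exploit concretely, here is a dependency-free sketch of the same mass-assignment pattern. The `createUserUnsafe` helper and `defaults` object are hypothetical stand-ins for the `db.user.create` call above:

```javascript
// Hypothetical stand-in for spreading the request body into a new record.
// Because the body is spread after the defaults, the client can override
// any field, including role.
const defaults = { role: 'user' };

function createUserUnsafe(body) {
  return { ...defaults, ...body }; // body wins on conflicts
}

const record = createUserUnsafe({ name: 'Eve', email: 'eve@example.com', role: 'admin' });
// record.role is now 'admin': the client granted itself admin
```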
Secure (what it should be):
import { z } from 'zod';

const CreateUserSchema = z.object({
  name: z.string().min(1).max(100).trim(),
  email: z.string().email().max(255).toLowerCase(),
  // role is NOT accepted from user input
});

app.post('/api/users', async (req, res) => {
  const result = CreateUserSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({
      error: 'Validation failed',
      details: result.error.issues
    });
  }
  const user = await db.user.create({
    data: {
      ...result.data,
      role: 'user', // hardcoded default, never from input
    }
  });
  res.json(user);
});
The fix isn't complicated: a Zod schema, an explicit allowlist of fields, and the role set server-side. But the AI almost never generates this pattern unless you specifically ask for validation.
Sin 2: Hardcoded Secrets and API Keys
AI models have seen thousands of tutorials and Stack Overflow answers with hardcoded credentials. Guess what they reproduce.
Vulnerable:
// AI loves putting secrets right in the code
const stripe = new Stripe('sk_live_abc123def456ghi789');

const dbConnection = mysql.createConnection({
  host: 'production-db.amazonaws.com',
  user: 'admin',
  password: 'SuperSecret123!',
  database: 'production'
});

// Or slightly better but still wrong — .env committed to git
// with no .gitignore entry
Secure:
import { z } from 'zod';

// Validate env vars exist at startup, fail fast
const envSchema = z.object({
  STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
  DATABASE_URL: z.string().url(),
  JWT_SECRET: z.string().min(32),
});

const env = envSchema.parse(process.env);
const stripe = new Stripe(env.STRIPE_SECRET_KEY);
And your .gitignore must include .env*. Always. Verify this exists after any AI scaffolding session. I've seen AI tools generate .gitignore files that include node_modules but miss .env.
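A minimal `.gitignore` fragment covering the usual secret-bearing files might look like this (the `.env.example` exception is a common convention for committing a template with placeholder values; adjust to your stack):

```gitignore
# dependencies
node_modules/

# environment files (never commit these)
.env
.env.*
!.env.example
```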
Sin 3: Missing Authentication Checks
This is the one that gave me access to that startup's database. AI tools generate routes, and sometimes they add auth middleware, sometimes they don't. There's no consistency.
Vulnerable:
// AI generated all CRUD routes but forgot auth on some
app.get('/api/users', authMiddleware, getUsers);     // protected
app.get('/api/users/:id', authMiddleware, getUser);  // protected
app.put('/api/users/:id', updateUser);               // OOPS - no auth
app.delete('/api/users/:id', deleteUser);            // OOPS - no auth

// Even worse: admin routes with client-side-only protection
app.get('/api/admin/analytics', getAnalytics); // no auth at all
// The AI put the protection in the React component instead:
// if (user.isAdmin) { <AdminDashboard /> }
Secure:
// Default-deny: apply auth globally, whitelist public routes
const publicRoutes = ['/api/auth/login', '/api/auth/register', '/api/health'];

app.use((req, res, next) => {
  if (publicRoutes.includes(req.path)) return next();
  return authMiddleware(req, res, next);
});

// Role-based middleware for admin routes
const requireRole = (role: string) => (req, res, next) => {
  if (req.user?.role !== role) {
    return res.status(403).json({ error: 'Forbidden' });
  }
  next();
};

app.get('/api/admin/analytics', requireRole('admin'), getAnalytics);
The principle is default-deny. Everything requires authentication unless explicitly whitelisted. AI generates default-allow because it builds routes one at a time.
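The default-deny rule reduces to a tiny pure function that you can unit-test without spinning up Express. The `requiresAuth` name below is my own sketch, not a library API:

```javascript
// Default-deny: a path requires auth unless it is explicitly whitelisted.
const publicRoutes = new Set(['/api/auth/login', '/api/auth/register', '/api/health']);

function requiresAuth(path) {
  return !publicRoutes.has(path);
}
```

The payoff is that a route added later is protected automatically; forgetting to whitelist it fails closed (a 401) rather than open.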
Sin 4: SQL Injection and NoSQL Injection
You'd think in 2026 we'd be past SQL injection. AI tools bring it back with enthusiasm.
Vulnerable:
// AI-generated search endpoint
app.get('/api/search', async (req, res) => {
  const { query } = req.query;
  const results = await db.$queryRawUnsafe(
    `SELECT * FROM products WHERE name LIKE '%${query}%'`
  );
  res.json(results);
});

// NoSQL variant with MongoDB
app.post('/api/login', async (req, res) => {
  const user = await User.findOne({
    email: req.body.email,
    password: req.body.password // also: plain text password comparison
  });
});
That MongoDB query is vulnerable to NoSQL injection. Send {"password": {"$ne": ""}} and you're in.
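A dependency-free mitigation is to refuse any credential field that is not a plain string before it ever reaches the query. The `asString` helper below is my own sketch, not a Mongoose API:

```javascript
// Block operator-injection payloads such as { "$ne": "" } by insisting
// that credential fields are plain strings before they reach the query.
function asString(value, field) {
  if (typeof value !== 'string') {
    throw new TypeError(`${field} must be a string`);
  }
  return value;
}

// Usage in a route handler:
//   const email = asString(req.body.email, 'email');
//   const password = asString(req.body.password, 'password');
```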
Secure:
// Parameterized query via the ORM's query builder
app.get('/api/search', async (req, res) => {
  const query = String(req.query.query || '').slice(0, 100);
  const results = await db.product.findMany({
    where: {
      name: { contains: query, mode: 'insensitive' }
    },
    take: 50,
  });
  res.json(results);
});

// Proper auth with hashed passwords
app.post('/api/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await User.findOne({ email });
  if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  // issue JWT / session
});
Always use your ORM's query builder. Never use $queryRawUnsafe or string concatenation. If your AI generates raw SQL strings, that's a red flag for the entire codebase.
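One subtlety survives even with the query builder: `%` and `_` are LIKE wildcards, so an untrusted search term can still match more than intended. A small escaping helper closes that gap; this is my own sketch, and you should check your database's `ESCAPE` clause semantics before relying on the exact backslash convention:

```javascript
// Escape LIKE wildcards so '%' and '_' in user input are matched
// literally. Pair with an ESCAPE '\' clause when used in SQL.
function escapeLike(input) {
  return input.replace(/[\\%_]/g, (ch) => '\\' + ch);
}
```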
Sin 5: No CSRF Protection
In the audit of five major AI coding tools, not one of them added CSRF protection to any of the 15 generated apps. Zero.
Vulnerable:
// AI sets up Express with JSON body parser and calls it done
app.use(express.json());
app.use(cors({ origin: '*' })); // bonus vulnerability: wildcard CORS

// No CSRF tokens anywhere. Every state-changing request
// can be triggered from any website.
Secure:
import csrf from 'csurf'; // note: csurf is deprecated upstream; for new apps prefer the SameSite + custom-header approach below
import helmet from 'helmet';

app.use(helmet()); // imported AND called; sets sensible default headers

// Restrictive CORS
app.use(cors({
  origin: process.env.ALLOWED_ORIGIN, // e.g., 'https://myapp.com'
  credentials: true,
}));

// Cookie-based CSRF tokens (requires cookie-parser middleware)
const csrfProtection = csrf({ cookie: true });
app.use('/api', csrfProtection);

// For SPA/API architectures, use SameSite cookies + custom headers
app.use((req, res, next) => {
  if (['POST', 'PUT', 'DELETE', 'PATCH'].includes(req.method)) {
    const csrfHeader = req.headers['x-requested-with'];
    if (csrfHeader !== 'XMLHttpRequest') {
      return res.status(403).json({ error: 'CSRF validation failed' });
    }
  }
  next();
});
For modern SPAs using fetch with JSON, the combination of SameSite=Strict cookies, restrictive CORS, and a custom request header provides strong CSRF protection. But the AI doesn't add any of these layers.
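A further layer that pairs well with the custom-header check is validating the Origin header on state-changing requests. Browsers attach Origin to cross-site POSTs, so a mismatch (or a missing header on a POST) is grounds for rejection. The `ALLOWED` set below is an assumed single-origin allowlist, not an Express feature:

```javascript
// Verify that state-changing requests come from our own origin.
const ALLOWED = new Set(['https://myapp.com']);

function isTrustedOrigin(originHeader) {
  if (!originHeader) return false; // no Origin on a state-changing request: reject
  try {
    return ALLOWED.has(new URL(originHeader).origin);
  } catch {
    return false; // malformed Origin header
  }
}
```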
Sin 6: Insecure Direct Object References (IDOR)
This is subtle and the AI almost never handles it correctly. When a user requests /api/invoices/123, does the code verify that invoice 123 belongs to them?
Vulnerable:
// AI generates clean CRUD but no ownership checks
app.get('/api/invoices/:id', authMiddleware, async (req, res) => {
  const invoice = await db.invoice.findUnique({
    where: { id: req.params.id }
  });
  if (!invoice) return res.status(404).json({ error: 'Not found' });
  res.json(invoice);
});
// Any authenticated user can read ANY invoice by guessing IDs
// Any authenticated user can read ANY invoice by guessing IDs
Secure:
app.get('/api/invoices/:id', authMiddleware, async (req, res) => {
  // findFirst, not findUnique: Prisma's findUnique only accepts
  // unique fields, and { id, userId } is not a unique compound here
  const invoice = await db.invoice.findFirst({
    where: {
      id: req.params.id,
      userId: req.user.id, // ownership check
    }
  });
  if (!invoice) return res.status(404).json({ error: 'Not found' });
  res.json(invoice);
});

// Even better: use a middleware pattern
const requireOwnership = (model: string, foreignKey = 'userId') =>
  async (req, res, next) => {
    const record = await db[model].findUnique({
      where: { id: req.params.id }
    });
    if (!record || record[foreignKey] !== req.user.id) {
      return res.status(404).json({ error: 'Not found' });
    }
    req.resource = record;
    next();
  };

app.get('/api/invoices/:id', authMiddleware, requireOwnership('invoice'), handler);
Notice that in the secure version, we return 404 instead of 403 for records that belong to other users. This prevents enumeration — an attacker can't distinguish "doesn't exist" from "exists but not yours."
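The uniform-404 behavior is easy to verify in isolation. Here is a toy in-memory version of my own (the Map and `getInvoice` are illustrative, not the Prisma code above):

```javascript
// Missing records and other users' records return the same 404,
// so an attacker cannot tell which invoice IDs exist.
const invoices = new Map([['1', { id: '1', userId: 'alice', total: 100 }]]);

function getInvoice(id, userId) {
  const invoice = invoices.get(id);
  if (!invoice || invoice.userId !== userId) {
    return { status: 404, body: { error: 'Not found' } };
  }
  return { status: 200, body: invoice };
}
```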
Sin 7: Missing Security Headers
Every browser has built-in security mechanisms. They just need to be activated via HTTP headers. AI tools never set them.
Vulnerable:
// AI generates a Next.js app with zero security headers
// next.config.js is either empty or only has redirects
module.exports = {
  reactStrictMode: true,
};
Secure:
// next.config.js
const securityHeaders = [
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'X-Frame-Options', value: 'DENY' },
  // the legacy XSS auditor is deprecated and removed from modern
  // browsers; explicitly disable it rather than enable it
  { key: 'X-XSS-Protection', value: '0' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
  {
    key: 'Content-Security-Policy',
    value: "default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com;"
  },
  {
    key: 'Strict-Transport-Security',
    value: 'max-age=63072000; includeSubDomains; preload'
  },
];

module.exports = {
  reactStrictMode: true,
  async headers() {
    return [{ source: '/(.*)', headers: securityHeaders }];
  },
};
For Express apps, just use helmet(). One line. But AI tools don't add it unless prompted, and even then they sometimes import it without calling it.