Building Scalable Microservices with Node.js and AWS
Learn how to architect and deploy microservices that can handle millions of requests with AWS Lambda and API Gateway.

Hey there, fellow developer! 👋
Ever felt overwhelmed managing a monolithic application that's grown into an unmaintainable beast? You're not alone. Let me share how microservices architecture with Node.js and AWS can transform your development workflow and help you build applications that scale effortlessly.
Why Should You Care About Microservices?
Think of microservices like a well-organized toolbox. Instead of one massive Swiss Army knife trying to do everything, you have specialized tools for specific jobs. Here's what that means for you:
The Real Benefits
- Scalability: Imagine Black Friday hits, and only your checkout service needs more power. Scale just that service, not your entire application. Save money and sleep better! 💰
- Flexibility: Your payment service needs Python for ML-based fraud detection? Your user service works great in Node.js? No problem! Mix and match technologies that fit each job perfectly.
- Resilience: When (not if) something breaks, it won't take down your entire application. Your image processing service crashes? Your users can still browse products and add items to cart.
- Team Autonomy: Different teams can own different services, deploy independently, and move faster without stepping on each other's toes.
- Easier Testing & Debugging: Smaller codebases mean you can actually understand what's happening when bugs appear at 2 AM.
When NOT to Use Microservices
Let's be real - microservices aren't always the answer. Skip them if:
- You're building an MVP or small application
- Your team is fewer than 3-4 developers
- You don't have experience with distributed systems
- Your application domain is simple and unlikely to grow
According to serverless.com, starting simple and evolving your architecture as complexity grows is often the smartest approach.
Architecture Overview: The Big Picture
Let me walk you through a production-ready architecture that I've used to handle millions of requests:
Core Components
- API Gateway: Your application's front door. It handles:
  - Request routing
  - Rate limiting
  - API key management
  - Request/response transformation
  - CORS handling
- Lambda Functions: The workhorses of your microservices. Think of them as tiny, independent servers that:
  - Only run when needed (pay only for what you use!)
  - Scale automatically from zero to thousands of concurrent executions
  - Require zero server management
- DynamoDB: Your NoSQL powerhouse for:
  - Sub-10ms response times
  - Automatic scaling
  - Built-in backup and point-in-time recovery
  - User profiles, session data, and product catalogs
- S3: Not just for storage! Use it for:
  - User uploads (images, documents)
  - Static website hosting
  - A data lake for analytics
  - Lambda deployment packages
- SQS: Your safety net for async operations:
  - Decouple services
  - Handle traffic spikes gracefully
  - Retry failed operations automatically
  - Process orders, send emails, generate reports
- EventBridge: The nervous system connecting everything (bonus!)
  - Service-to-service communication
  - Schedule recurring tasks
  - React to AWS resource changes
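EventBridge only gets a passing mention here, so to make it concrete, here's a minimal sketch of publishing a domain event with the AWS SDK v3. The bus name, source, and detail-type strings are illustrative choices, not prescribed by this architecture:

// events.js — a sketch of service-to-service communication via EventBridge
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

export const publishUserCreated = async (user) => {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      EventBusName: 'default',      // or a custom bus per domain
      Source: 'user-service',       // identifies the emitting service
      DetailType: 'UserCreated',    // consumers filter on this
      Detail: JSON.stringify({ userId: user.userId, email: user.email })
    }]
  }));
};

Any number of downstream services can then subscribe with an EventBridge rule, without the user service knowing they exist.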
Real-World Implementation
Let's build something practical - a user registration microservice that actually handles the messy real-world scenarios.
Setting Up Your Lambda Function (The Right Way)
// handler.js
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';
import { randomUUID } from 'crypto';

// Initialize clients outside the handler for connection reuse across warm invocations
const dynamoClient = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(dynamoClient);
const sqsClient = new SQSClient({});

// Custom error class for better error handling
class ValidationError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ValidationError';
    this.statusCode = 400;
  }
}

// Input validation helper
const validateUser = (data) => {
  const { email, name } = data;
  if (!email || !email.match(/^[^\s@]+@[^\s@]+\.[^\s@]+$/)) {
    throw new ValidationError('Invalid email address');
  }
  if (!name || name.length < 2) {
    throw new ValidationError('Name must be at least 2 characters');
  }
  return true;
};

export const handler = async (event) => {
  // Add a correlation ID for tracing requests across services
  const correlationId = event.headers?.['x-correlation-id'] || randomUUID();

  console.log('Processing request', {
    correlationId,
    path: event.path,
    method: event.httpMethod
  });

  try {
    // Parse and validate input
    const body = JSON.parse(event.body || '{}');
    validateUser(body);

    const userId = randomUUID();
    const timestamp = new Date().toISOString();

    const user = {
      userId,
      email: body.email.toLowerCase(),
      name: body.name,
      createdAt: timestamp,
      updatedAt: timestamp,
      status: 'PENDING_VERIFICATION'
    };

    // Save to DynamoDB. Note: with a freshly generated userId this condition
    // mainly guards against retries; it does not enforce unique emails
    await docClient.send(new PutCommand({
      TableName: process.env.USERS_TABLE,
      Item: user,
      ConditionExpression: 'attribute_not_exists(userId)'
    }));

    // Send welcome email asynchronously via SQS
    await sqsClient.send(new SendMessageCommand({
      QueueUrl: process.env.EMAIL_QUEUE_URL,
      MessageBody: JSON.stringify({
        type: 'WELCOME_EMAIL',
        userId,
        email: user.email,
        name: user.name
      }),
      MessageAttributes: {
        correlationId: {
          DataType: 'String',
          StringValue: correlationId
        }
      }
    }));

    console.log('User created successfully', { userId, correlationId });

    return {
      statusCode: 201,
      headers: {
        'Content-Type': 'application/json',
        'X-Correlation-ID': correlationId,
        // CORS: lock this down in production; a wildcard origin cannot be
        // combined with Access-Control-Allow-Credentials, so it's omitted here
        'Access-Control-Allow-Origin': '*'
      },
      body: JSON.stringify({
        success: true,
        data: {
          userId,
          email: user.email,
          name: user.name
        }
      })
    };
  } catch (error) {
    console.error('Error processing request', {
      error: error.message,
      stack: error.stack,
      correlationId
    });

    // Handle different error types appropriately
    if (error instanceof ValidationError) {
      return {
        statusCode: error.statusCode,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          success: false,
          error: error.message
        })
      };
    }

    // DynamoDB conditional check failed - the user already exists
    if (error.name === 'ConditionalCheckFailedException') {
      return {
        statusCode: 409,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          success: false,
          error: 'User already exists'
        })
      };
    }

    // Generic server error
    return {
      statusCode: 500,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        success: false,
        error: 'Internal server error',
        // Include the correlation ID so failures can be traced in the logs
        correlationId
      })
    };
  }
};
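The handler only enqueues the welcome email; a separate worker drains the queue. Here's a hedged sketch of what that consumer could look like. The article doesn't prescribe a mail provider, so SES and the sender address are assumptions:

// emailWorker.js — a sketch of the SQS consumer (SES is an assumption)
import { SESv2Client, SendEmailCommand } from '@aws-sdk/client-sesv2';

const ses = new SESv2Client({});

export const emailWorker = async (event) => {
  // SQS invokes Lambda with a batch of records
  for (const record of event.Records) {
    const message = JSON.parse(record.body);
    if (message.type !== 'WELCOME_EMAIL') continue;

    await ses.send(new SendEmailCommand({
      FromEmailAddress: 'noreply@example.com', // placeholder sender
      Destination: { ToAddresses: [message.email] },
      Content: {
        Simple: {
          Subject: { Data: `Welcome, ${message.name}!` },
          Body: { Text: { Data: 'Thanks for signing up.' } }
        }
      }
    }));
  }
};

If a message fails three times, the redrive policy below moves it to the dead-letter queue instead of retrying forever.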
Serverless Framework Configuration
As recommended by AWS best practices, use Infrastructure as Code:
# serverless.yml
service: user-service
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  stage: ${opt:stage, 'dev'}

  # Environment variables
  environment:
    USERS_TABLE: ${self:service}-users-${self:provider.stage}
    EMAIL_QUEUE_URL: !Ref EmailQueue
    NODE_ENV: ${self:provider.stage}

  # IAM permissions
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:PutItem
            - dynamodb:GetItem
            - dynamodb:UpdateItem
            - dynamodb:Query
          Resource: !GetAtt UsersTable.Arn
        - Effect: Allow
          Action:
            - sqs:SendMessage
          Resource: !GetAtt EmailQueue.Arn

  # Logging
  logs:
    restApi: true

  # API Gateway configuration
  apiGateway:
    shouldStartNameWithService: true
    metrics: true

functions:
  createUser:
    handler: handler.handler
    events:
      - http:
          path: users
          method: post
          cors: true
          # Request validation
          request:
            schemas:
              application/json: ${file(schemas/user-create.json)}
    # Lambda configuration
    memorySize: 512
    timeout: 10
    reservedConcurrency: 100

# Deployment configuration
package:
  patterns:
    - '!tests/**'
    - '!.git/**'

resources:
  Resources:
    # DynamoDB table
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.USERS_TABLE}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: email
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        GlobalSecondaryIndexes:
          - IndexName: EmailIndex
            KeySchema:
              - AttributeName: email
                KeyType: HASH
            Projection:
              ProjectionType: ALL
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: true
        StreamSpecification:
          StreamViewType: NEW_AND_OLD_IMAGES

    # SQS queue
    EmailQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:service}-email-queue-${self:provider.stage}
        VisibilityTimeout: 300
        MessageRetentionPeriod: 1209600 # 14 days
        RedrivePolicy:
          deadLetterTargetArn: !GetAtt EmailDLQ.Arn
          maxReceiveCount: 3

    # Dead-letter queue
    EmailDLQ:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:service}-email-dlq-${self:provider.stage}
        MessageRetentionPeriod: 1209600

plugins:
  - serverless-offline
  - serverless-plugin-typescript
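One piece the config references but never shows is schemas/user-create.json. Here's an illustrative version; API Gateway request validators use a JSON Schema draft-04 subset, and these rules mirror the validateUser helper (the handler's email regex still applies after the gateway-level check):

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "UserCreate",
  "type": "object",
  "required": ["email", "name"],
  "properties": {
    "email": { "type": "string" },
    "name": { "type": "string", "minLength": 2 }
  },
  "additionalProperties": false
}

With this in place, malformed requests are rejected with a 400 before your Lambda is ever invoked, which also saves you the invocation cost.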
Production-Ready Best Practices
1. Environment Configuration
Never, ever hardcode secrets! Here's the right way, as described on Medium:
// config.js
export const config = {
  region: process.env.AWS_REGION || 'us-east-1',
  usersTable: process.env.USERS_TABLE,
  jwtSecret: process.env.JWT_SECRET, // stored in AWS Secrets Manager
  corsOrigin: process.env.CORS_ORIGIN || '*',
  logLevel: process.env.LOG_LEVEL || 'info'
};

// Validate required environment variables on cold start
const requiredEnvVars = ['USERS_TABLE', 'JWT_SECRET'];
requiredEnvVars.forEach((varName) => {
  if (!process.env[varName]) {
    throw new Error(`Missing required environment variable: ${varName}`);
  }
});
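Since the comment above says JWT_SECRET lives in AWS Secrets Manager, here's one hedged way to resolve it at cold start and cache it across warm invocations. The JWT_SECRET_NAME variable is an assumption for this sketch, not part of the setup above:

// secrets.js — a sketch of resolving a secret at cold start
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const secretsClient = new SecretsManagerClient({});
let cachedSecret; // module scope survives across warm invocations

export const getJwtSecret = async () => {
  if (cachedSecret) return cachedSecret;
  const { SecretString } = await secretsClient.send(
    new GetSecretValueCommand({ SecretId: process.env.JWT_SECRET_NAME })
  );
  cachedSecret = SecretString;
  return cachedSecret;
};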
2. Comprehensive Logging & Monitoring
// logger.js
import winston from 'winston';

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: 'user-service',
    environment: process.env.NODE_ENV
  },
  transports: [
    new winston.transports.Console()
  ]
});

export default logger;
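Usage then stays terse: bind the request's correlation ID once with winston's child(), and every entry carries it automatically:

// In the handler: attach request-scoped metadata once
const log = logger.child({ correlationId });
log.info('User created', { userId }); // JSON output includes service, environment, correlationId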
3. API Versioning Strategy
// v1/handler.js
export const handler = async (event) => {
  // Version 1 logic
};

// v2/handler.js
export const handler = async (event) => {
  // Version 2 with breaking changes
};
Configure in serverless.yml:
functions:
  createUserV1:
    handler: v1/handler.handler
    events:
      - http:
          path: v1/users
          method: post
  createUserV2:
    handler: v2/handler.handler
    events:
      - http:
          path: v2/users
          method: post
4. Authentication & Authorization
// authorizer.js
import jwt from 'jsonwebtoken';

export const handler = async (event) => {
  try {
    const token = event.authorizationToken.replace('Bearer ', '');
    const decoded = jwt.verify(token, process.env.JWT_SECRET);

    return {
      principalId: decoded.userId,
      policyDocument: {
        Version: '2012-10-17',
        Statement: [{
          Action: 'execute-api:Invoke',
          Effect: 'Allow',
          Resource: event.methodArn
        }]
      },
      context: {
        userId: decoded.userId,
        email: decoded.email
      }
    };
  } catch (error) {
    // API Gateway maps this exact error message to a 401 response
    throw new Error('Unauthorized');
  }
};
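Once the authorizer allows a request, API Gateway forwards the context object to the downstream Lambda, so protected handlers can read the caller's identity without re-verifying the token:

// Inside any protected handler (REST API proxy event format)
const { userId, email } = event.requestContext.authorizer;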
5. Health Checks & Circuit Breakers
// health.js
// checkDynamoDB and checkSQS are dependency probes; example implementations
// follow after this block.
export const handler = async () => {
  const checks = {
    dynamodb: await checkDynamoDB(),
    sqs: await checkSQS(),
    timestamp: new Date().toISOString()
  };

  const healthy = Object.values(checks).every((check) =>
    typeof check === 'object' ? check.status === 'healthy' : true
  );

  return {
    statusCode: healthy ? 200 : 503,
    body: JSON.stringify(checks)
  };
};
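The handler assumes checkDynamoDB and checkSQS exist. Here's a minimal sketch of both, using cheap read-only calls as liveness probes; the exact probe calls are my choice, not mandated by the article:

import { DynamoDBClient, DescribeTableCommand } from '@aws-sdk/client-dynamodb';
import { SQSClient, GetQueueAttributesCommand } from '@aws-sdk/client-sqs';

const dynamoClient = new DynamoDBClient({});
const sqsClient = new SQSClient({});

// Probe DynamoDB with a metadata-only call (consumes no read capacity)
const checkDynamoDB = async () => {
  try {
    await dynamoClient.send(new DescribeTableCommand({ TableName: process.env.USERS_TABLE }));
    return { status: 'healthy' };
  } catch (error) {
    return { status: 'unhealthy', error: error.message };
  }
};

// Probe SQS by reading a single queue attribute
const checkSQS = async () => {
  try {
    await sqsClient.send(new GetQueueAttributesCommand({
      QueueUrl: process.env.EMAIL_QUEUE_URL,
      AttributeNames: ['ApproximateNumberOfMessages']
    }));
    return { status: 'healthy' };
  } catch (error) {
    return { status: 'unhealthy', error: error.message };
  }
};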
Performance Optimization Tips
Based on insights from Simple AWS:
1. Lambda Cold Start Optimization
// Initialize connections outside the handler, and tune the HTTP handler
// explicitly (AWS SDK v3 takes a NodeHttpHandler instance)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { NodeHttpHandler } from '@smithy/node-http-handler';

const dynamoClient = new DynamoDBClient({
  maxAttempts: 3, // built-in SDK retries
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 3000, // fail fast on connect
    socketTimeout: 3000      // and on slow responses
  })
});

// Minimize package size:
// use esbuild or webpack to bundle only what you need (see the sketch below)
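The bundling advice can be a ten-line script. Here's a hedged sketch using esbuild's JS API; the exact flags are my choice, and since the nodejs18.x runtime the Lambda environment ships AWS SDK v3, so the SDK can stay external:

// build.mjs — a sketch, not the article's actual build setup
import { build } from 'esbuild';

await build({
  entryPoints: ['handler.js'],
  bundle: true,               // inline dependencies into a single file
  minify: true,               // smaller packages cold-start faster
  platform: 'node',
  target: 'node20',
  format: 'esm',
  outfile: 'dist/handler.mjs',
  // the runtime already provides AWS SDK v3, so keep it out of the bundle
  external: ['@aws-sdk/*']
});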
2. DynamoDB Best Practices
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, BatchWriteCommand, GetCommand } from '@aws-sdk/lib-dynamodb';

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Split an array into fixed-size chunks
const chunkArray = (items, size) =>
  Array.from({ length: Math.ceil(items.length / size) }, (_, i) =>
    items.slice(i * size, (i + 1) * size)
  );

// Use batch operations when possible
const batchWrite = async (tableName, items) => {
  const chunks = chunkArray(items, 25); // BatchWriteItem accepts at most 25 items
  await Promise.all(
    chunks.map((chunk) =>
      docClient.send(new BatchWriteCommand({
        RequestItems: {
          [tableName]: chunk.map((item) => ({
            PutRequest: { Item: item }
          }))
        }
      }))
    )
  );
};

// Use projections to fetch only the attributes you need
const getUserSummary = async (tableName, userId) =>
  docClient.send(new GetCommand({
    TableName: tableName,
    Key: { userId },
    ProjectionExpression: 'email, #name, createdAt',
    ExpressionAttributeNames: { '#name': 'name' } // "name" is a DynamoDB reserved word
  }));
3. Caching Strategy
// Use ElastiCache or DynamoDB DAX for frequent reads
import { createClient } from 'redis';

const redis = createClient({
  url: process.env.REDIS_URL,
  socket: {
    connectTimeout: 3000
  }
});
await redis.connect(); // node-redis v4 requires an explicit connect

const getUserWithCache = async (userId) => {
  const cached = await redis.get(`user:${userId}`);
  if (cached) return JSON.parse(cached);

  // getUserFromDB is the service's own DynamoDB lookup (not shown here)
  const user = await getUserFromDB(userId);
  await redis.setEx(`user:${userId}`, 3600, JSON.stringify(user)); // 1-hour TTL
  return user;
};
Testing Your Microservices
// handler.test.js
import { handler } from './handler';

describe('User Creation', () => {
  it('should create user successfully', async () => {
    const event = {
      body: JSON.stringify({
        email: 'test@example.com',
        name: 'Test User'
      }),
      headers: {}
    };

    const response = await handler(event);

    expect(response.statusCode).toBe(201);
    const body = JSON.parse(response.body);
    expect(body.success).toBe(true);
    expect(body.data.email).toBe('test@example.com');
  });

  it('should reject invalid email', async () => {
    const event = {
      body: JSON.stringify({
        email: 'invalid-email',
        name: 'Test User'
      }),
      headers: {}
    };

    const response = await handler(event);
    expect(response.statusCode).toBe(400);
  });
});
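As written, the first test would call real DynamoDB and SQS. One common approach, assuming the aws-sdk-client-mock package, is to stub the SDK clients at the top of the test file so the suite runs offline:

// handler.test.js (top of file) — hedged sketch using aws-sdk-client-mock
import { mockClient } from 'aws-sdk-client-mock';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const ddbMock = mockClient(DynamoDBDocumentClient);
const sqsMock = mockClient(SQSClient);

beforeEach(() => {
  // Fresh stubs per test: writes and sends succeed by default
  ddbMock.reset();
  sqsMock.reset();
  ddbMock.on(PutCommand).resolves({});
  sqsMock.on(SendMessageCommand).resolves({ MessageId: 'mock-id' });
});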
Deployment Strategy
Following serverless deployment best practices:
# Install dependencies
npm install
# Run tests
npm test
# Deploy to development
serverless deploy --stage dev
# Run integration tests
npm run test:integration
# Deploy to production
serverless deploy --stage prod
# Monitor deployment
serverless logs -f createUser --stage prod --tail
Monitoring & Debugging in Production
# CloudWatch alarm on Lambda errors
resources:
  Resources:
    ErrorAlarm:
      Type: AWS::CloudWatch::Alarm
      Properties:
        AlarmName: ${self:service}-errors-${self:provider.stage}
        MetricName: Errors
        Namespace: AWS/Lambda
        Statistic: Sum
        Period: 300
        EvaluationPeriods: 1
        Threshold: 10
        ComparisonOperator: GreaterThanThreshold
        Dimensions:
          - Name: FunctionName
            Value: !Ref CreateUserLambdaFunction
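Beyond the built-in Lambda metrics, you can emit custom business metrics with zero API calls by logging in CloudWatch Embedded Metric Format (EMF): CloudWatch Logs turns these structured lines into metrics automatically. The namespace and metric name below are illustrative:

// Inside any handler: one structured log line == one metric data point
console.log(JSON.stringify({
  _aws: {
    Timestamp: Date.now(),
    CloudWatchMetrics: [{
      Namespace: 'UserService',   // illustrative namespace
      Dimensions: [['Stage']],
      Metrics: [{ Name: 'UsersCreated', Unit: 'Count' }]
    }]
  },
  Stage: process.env.NODE_ENV || 'dev',
  UsersCreated: 1
}));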
Cost Optimization
Track your costs with these practices from maxrohde.com:
- Set Lambda reserved concurrency to prevent runaway costs
- Use DynamoDB on-demand for unpredictable workloads
- Implement request throttling at API Gateway level
- Archive old logs to S3 Glacier
- Monitor with AWS Cost Explorer and set up billing alerts
Common Pitfalls to Avoid
❌ Don't: Create nano-services (one function per database operation)
✅ Do: Group related functionality into cohesive services
❌ Don't: Share databases between services
✅ Do: Give each service ownership of its own data
❌ Don't: Make synchronous calls between services
✅ Do: Use async messaging (SQS, EventBridge)
❌ Don't: Ignore cold starts
✅ Do: Optimize bundle size and use provisioned concurrency for critical paths
Wrapping Up
Building microservices with Node.js and AWS is an incredibly powerful combination. Start with one service, learn the patterns, add monitoring, then gradually expand.
Remember: Microservices are a journey, not a destination. Don't try to build everything at once. Start simple, measure everything, and evolve your architecture based on real-world usage patterns.
Next Steps
- Build your first Lambda function
- Add DynamoDB for persistence
- Implement authentication
- Add monitoring and alerts
- Set up CI/CD pipeline
- Scale individual services based on metrics
Happy building! 🚀
Got questions? Drop them in the comments below, and let's learn together!


