Serverless — Functions Without Servers
Run code in response to events — no server management, automatic scaling, pay per invocation.
When to use
- Event-driven workloads (webhooks, async processing, scheduled jobs)
- Spiky or infrequent traffic (scales to zero, no idle cost)
- Background processing that does not need low latency
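For the async-processing case above, a queue-triggered function looks much like an HTTP one, except the event carries a batch of messages instead of a request. A minimal sketch, assuming the AWS SQS event shape (a `Records` list, each with a string `body`); the work inside the loop is a placeholder:

```python
import json

def handler(event: dict, context) -> dict:
    # SQS delivers a batch of messages under "Records";
    # each message body is an opaque string (here, JSON).
    processed = 0
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        # placeholder for real work: write to a DB, call a service, etc.
        processed += 1
    return {"processed": processed}
```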
When NOT to use
- Long-running processes (>15 min timeout on most platforms)
- Latency-sensitive hot paths (cold starts can be 100ms–3s)
- Stateful workloads
Tradeoffs
- Cold starts: significant for JVM, moderate for Node/Python, minimal for Go
- Vendor lock-in on trigger types, runtime config, and resource limits
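One way to soften the lock-in is to keep business logic in plain functions and confine provider-specific event parsing to a thin adapter. A sketch under that pattern, assuming the API Gateway proxy event shape; `get_user_status` is a hypothetical domain function:

```python
import json

def get_user_status(user_id: str) -> dict:
    # Plain domain logic: no provider types, trivially unit-testable.
    return {"user_id": user_id, "status": "ok"}

def lambda_handler(event: dict, context) -> dict:
    # Thin AWS-specific adapter: translate the provider event into
    # plain arguments, and the result back into a proxy response.
    user_id = (event.get("queryStringParameters") or {}).get("user_id")
    if not user_id:
        return {"statusCode": 400, "body": "missing user_id"}
    return {"statusCode": 200, "body": json.dumps(get_user_status(user_id))}
```

Porting to another platform then means rewriting only the adapter, not the logic.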
Go

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler responds to API Gateway proxy requests.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	userID := req.QueryStringParameters["user_id"]
	if userID == "" {
		return events.APIGatewayProxyResponse{StatusCode: 400, Body: "missing user_id"}, nil
	}
	// %q quotes and escapes the value, so the body stays valid JSON.
	result := fmt.Sprintf(`{"user_id": %q, "status": "ok"}`, userID)
	return events.APIGatewayProxyResponse{StatusCode: 200, Body: result}, nil
}

func main() { lambda.Start(handler) }
```
Python

```python
import json

def handler(event: dict, context) -> dict:
    # API Gateway sends "queryStringParameters": null when there are
    # no query params, so guard against None before calling .get().
    user_id = (event.get("queryStringParameters") or {}).get("user_id")
    if not user_id:
        return {"statusCode": 400, "body": "missing user_id"}
    # perform work: call a DB, publish an event, etc.
    result = {"user_id": user_id, "status": "ok"}
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }
```
Gotcha: cold starts are worst with heavyweight runtimes and large deployment packages. Go typically cold-starts fastest (on the order of 50ms); JVM runtimes are the slowest and can take seconds, especially with large frameworks. For latency-critical serverless endpoints, use provisioned concurrency (or your platform's equivalent of pre-warmed instances).
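Relatedly, anything initialized at module level is paid for once per cold start and then reused across warm invocations, so expensive setup (SDK clients, connection pools) belongs outside the handler. A sketch with a stand-in for an expensive client; `make_client` and its cost are illustrative:

```python
import time

def make_client():
    # Stand-in for expensive setup (SDK client, DB connection pool).
    time.sleep(0.01)
    return {"created_at": time.time()}

# Module scope: runs once per cold start, then persists for as long
# as the execution environment stays warm.
CLIENT = make_client()

def handler(event: dict, context) -> dict:
    # Warm invocations reuse CLIENT instead of rebuilding it.
    return {"statusCode": 200, "client_created_at": CLIENT["created_at"]}
```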