Serverless Databases: Options and Trade-offs

July 12, 2021

Serverless compute has matured, but data often remains the bottleneck. Traditional databases require provisioned capacity, connection pooling, and careful scaling. Serverless databases promise to solve this—but each option has significant trade-offs.

Here’s how to navigate serverless database choices.

The Serverless Database Challenge

Why Traditional Databases Don’t Fit

traditional_database_problems:
  connection_limits:
    - Lambda functions scale to thousands of concurrent instances
    - PostgreSQL's max_connections defaults to just 100
    - Connection pooling helps but has limits

  cold_start_latency:
    - Opening database connections is slow
    - SSL handshake adds time
    - Connection reuse is unreliable

  cost_model:
    - Pay for provisioned capacity
    - Under-provision: performance issues
    - Over-provision: wasted money

  scaling:
    - Manual read replicas
    - Complex sharding
    - Downtime for upgrades
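
To make the connection-limit mismatch concrete, here's a small sketch. The numbers are illustrative defaults, not measurements, and the helper names are mine:

```javascript
// Illustrative sketch: Lambda concurrency vs. a fixed connection cap.
// Each warm Lambda container typically holds its own connection, so
// connection count tracks concurrency, not request rate.
function connectionsNeeded(concurrentLambdas, connectionsPerInstance = 1) {
  return concurrentLambdas * connectionsPerInstance;
}

function exceedsLimit(concurrentLambdas, maxConnections) {
  return connectionsNeeded(concurrentLambdas) > maxConnections;
}

// A default PostgreSQL max_connections of 100 is exhausted by a
// modest spike of 500 concurrent invocations.
console.log(exceedsLimit(500, 100)); // true
console.log(exceedsLimit(80, 100));  // false
```

The key point: connections scale with warm containers, so a traffic spike translates directly into connection pressure even if each function is mostly idle.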

What Serverless Databases Offer

serverless_characteristics:
  scale_to_zero:
    - No cost when idle
    - Important for dev/staging
    - Variable workloads

  automatic_scaling:
    - Handle traffic spikes
    - No capacity planning
    - Pay for actual usage

  connection_handling:
    - HTTP APIs or connection pooling
    - Works with ephemeral compute
    - No connection exhaustion

DynamoDB

When to Use

dynamodb_fit:
  good_for:
    - Key-value access patterns
    - Known query patterns
    - High scale, low latency
    - Truly serverless (no connections)

  not_for:
    - Ad-hoc queries
    - Complex joins
    - Unknown access patterns
    - Relational data

Access Patterns

// DynamoDB - design for access patterns

// Table design
const tableSchema = {
  TableName: 'Orders',
  BillingMode: 'PAY_PER_REQUEST',
  // CreateTable requires every key attribute to be declared
  AttributeDefinitions: [
    { AttributeName: 'PK', AttributeType: 'S' },
    { AttributeName: 'SK', AttributeType: 'S' },
    { AttributeName: 'GSI1PK', AttributeType: 'S' },
    { AttributeName: 'GSI1SK', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'PK', KeyType: 'HASH' },
    { AttributeName: 'SK', KeyType: 'RANGE' }
  ],
  GlobalSecondaryIndexes: [
    {
      IndexName: 'GSI1',
      KeySchema: [
        { AttributeName: 'GSI1PK', KeyType: 'HASH' },
        { AttributeName: 'GSI1SK', KeyType: 'RANGE' }
      ],
      Projection: { ProjectionType: 'ALL' }
    }
  ]
};

// Single table design
// Orders: PK=ORDER#123, SK=ORDER#123
// Order items: PK=ORDER#123, SK=ITEM#001
// Customer orders: GSI1PK=CUSTOMER#456, GSI1SK=ORDER#123

// Get order with items - single query
// (dynamodb is an AWS.DynamoDB.DocumentClient, aws-sdk v2)
const result = await dynamodb.query({
  TableName: 'Orders',
  KeyConditionExpression: 'PK = :pk',
  ExpressionAttributeValues: { ':pk': `ORDER#${orderId}` }
}).promise();

// Customer's orders - GSI query
const customerOrders = await dynamodb.query({
  TableName: 'Orders',
  IndexName: 'GSI1',
  KeyConditionExpression: 'GSI1PK = :pk',
  ExpressionAttributeValues: { ':pk': `CUSTOMER#${customerId}` }
}).promise();
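
The key-construction convention above can be captured in a few helpers. The helper names are mine, not part of any SDK; only the PK/SK patterns come from the table design:

```javascript
// Helpers encoding the single-table key patterns shown above.
const orderKey = (orderId) => ({
  PK: `ORDER#${orderId}`,
  SK: `ORDER#${orderId}`
});

const orderItemKey = (orderId, itemNo) => ({
  PK: `ORDER#${orderId}`,
  SK: `ITEM#${itemNo}`
});

const customerOrderIndexKey = (customerId, orderId) => ({
  GSI1PK: `CUSTOMER#${customerId}`,
  GSI1SK: `ORDER#${orderId}`
});

console.log(orderItemKey('123', '001')); // { PK: 'ORDER#123', SK: 'ITEM#001' }
```

Centralizing key construction matters in single-table design because the keys effectively are the schema: change a prefix in one place and every query stays consistent.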

Pricing Model

dynamodb_pricing:
  on_demand:
    reads: $0.25 per million read request units
    writes: $1.25 per million write request units
    pros: True pay-per-use
    cons: More expensive at steady load

  provisioned:
    reads: ~$0.09 per RCU/month
    writes: ~$0.47 per WCU/month
    pros: Cheaper at scale
    cons: Capacity planning needed

  recommendation:
    - Start with on-demand
    - Switch to provisioned at scale
    - Use auto-scaling with provisioned
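
As a rough sanity check on the "cheaper at steady load" claim, here's the write-cost arithmetic using the list prices above. This is illustrative only: it ignores storage, the free tier, and reserved capacity, and assumes one write request unit per write with full utilization of provisioned capacity:

```javascript
// Rough write-cost comparison at a steady load.
const SECONDS_PER_MONTH = 30 * 24 * 3600; // 2,592,000

// On-demand: $1.25 per million write request units
const onDemandWriteCost = (writesPerMonth) => (writesPerMonth / 1e6) * 1.25;

// Provisioned: ~$0.47 per WCU-month; 1 WCU sustains 1 write/second
const provisionedWriteCost = (writesPerSecond) => writesPerSecond * 0.47;

// A steady 10 writes/second:
const monthly = 10 * SECONDS_PER_MONTH; // 25,920,000 writes
console.log(onDemandWriteCost(monthly).toFixed(2));  // "32.40"
console.log(provisionedWriteCost(10).toFixed(2));    // "4.70"
```

At full, steady utilization provisioned is roughly 7x cheaper here; the gap shrinks (and can invert) for spiky or idle-heavy workloads, which is why starting on-demand is the safer default.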

Aurora Serverless v2

When to Use

aurora_serverless_fit:
  good_for:
    - MySQL/PostgreSQL compatibility
    - Variable workloads
    - Unknown or changing access patterns
    - Relational data

  not_for:
    - Truly serverless (needs VPC)
    - Scale to zero (v2 doesn't)
    - Lambda without VPC

Configuration

# Aurora Serverless v2 CloudFormation
AuroraCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-postgresql
    EngineMode: provisioned  # v2 uses provisioned mode
    ServerlessV2ScalingConfiguration:
      MinCapacity: 0.5  # Minimum ACUs
      MaxCapacity: 16   # Maximum ACUs
    DBClusterIdentifier: my-serverless-db
    MasterUsername: postgres  # 'admin' is a reserved word in PostgreSQL
    MasterUserPassword: !Ref DBPassword
    VpcSecurityGroupIds:
      - !Ref DBSecurityGroup
    DBSubnetGroupName: !Ref DBSubnetGroup

AuroraInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBClusterIdentifier: !Ref AuroraCluster
    DBInstanceClass: db.serverless
    Engine: aurora-postgresql

Connection Handling

// RDS Proxy for connection pooling
const { SecretsManager } = require('aws-sdk');
const { Pool } = require('pg');

// Get credentials from Secrets Manager
async function getConnection() {
  const secretsManager = new SecretsManager();
  const secret = await secretsManager.getSecretValue({
    SecretId: process.env.DB_SECRET_ARN
  }).promise();

  const credentials = JSON.parse(secret.SecretString);

  // Connect through RDS Proxy
  return new Pool({
    host: process.env.RDS_PROXY_ENDPOINT,
    user: credentials.username,
    password: credentials.password,
    database: 'mydb',
    max: 1,  // Lambda: one connection per instance
    ssl: { rejectUnauthorized: false }  // dev only: verify against the RDS CA bundle in production
  });
}
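
One caveat with the code above: getConnection() builds a new Pool (and calls Secrets Manager) on every invocation. In Lambda, state defined outside the handler survives across warm invocations of the same container, so the pool should be created once and reused. A minimal caching sketch, with the factory injected so the logic runs without a live database:

```javascript
// Cache the pool promise across warm invocations of one container.
// The factory is injected (it would be the getConnection() above),
// so this sketch needs no database or AWS credentials.
let poolPromise = null;

function getCachedPool(createPool) {
  if (!poolPromise) {
    poolPromise = createPool(); // runs once per container lifetime
  }
  return poolPromise;
}

// Inside the handler: const pool = await getCachedPool(getConnection);
```

Caching the promise (rather than the resolved pool) also means concurrent first calls within one container share a single in-flight connection attempt.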

PlanetScale

When to Use

planetscale_fit:
  good_for:
    - MySQL compatibility
    - Branching workflow (like git)
    - Schema migrations without downtime
    - Distributed global deployment

  not_for:
    - PostgreSQL requirement
    - Self-hosted requirement
    - Very low latency (adds network hop)

Branching Workflow

# PlanetScale branching for schema changes

# Create development branch
pscale branch create mydb feature-add-users

# Make schema changes on branch
pscale shell mydb feature-add-users
# > CREATE TABLE users (id INT PRIMARY KEY, email VARCHAR(255));

# Create deploy request (like PR)
pscale deploy-request create mydb feature-add-users

# Merge to main (online, no downtime)
pscale deploy-request deploy mydb 1

Serverless Driver

// PlanetScale serverless driver - HTTP based
import { connect } from '@planetscale/database';

const conn = connect({
  host: process.env.DATABASE_HOST,
  username: process.env.DATABASE_USERNAME,
  password: process.env.DATABASE_PASSWORD
});

// Works well in serverless: each query is a stateless HTTP request,
// so there are no connections to pool or exhaust
export async function handler(event) {
  const results = await conn.execute(
    'SELECT * FROM users WHERE id = ?',
    [event.userId]
  );

  return { statusCode: 200, body: JSON.stringify(results.rows) };
}

Fauna

When to Use

fauna_fit:
  good_for:
    - Global distribution built-in
    - Strong consistency globally
    - GraphQL native
    - Truly serverless (HTTP API)

  not_for:
    - SQL requirement
    - Cost-sensitive workloads
    - Simple key-value patterns

Query Example

import { Client, query as q } from 'faunadb';

const client = new Client({ secret: process.env.FAUNA_SECRET });

// Create document
await client.query(
  q.Create(
    q.Collection('orders'),
    {
      data: {
        customerId: 'cust_123',
        items: [{ sku: 'ABC', qty: 2 }],
        total: 99.99,
        createdAt: q.Now()
      }
    }
  )
);

// Query with index
const result = await client.query(
  q.Map(
    q.Paginate(
      q.Match(q.Index('orders_by_customer'), 'cust_123')
    ),
    q.Lambda('ref', q.Get(q.Var('ref')))
  )
);

Comparison Matrix

comparison:
  dynamodb:
    model: Key-value/Document
    sql: No
    scale_to_zero: Yes
    global: Yes (Global Tables)
    connections: HTTP API
    pricing: Per request
    best_for: Known access patterns, massive scale

  aurora_serverless:
    model: Relational
    sql: Yes (MySQL/PostgreSQL)
    scale_to_zero: v1 only (deprecated)
    global: No
    connections: Requires proxy
    pricing: Per ACU-hour
    best_for: Relational data, variable load

  planetscale:
    model: Relational
    sql: Yes (MySQL)
    scale_to_zero: Yes
    global: Yes
    connections: HTTP driver
    pricing: Per row read/written
    best_for: Schema evolution, MySQL compatibility

  fauna:
    model: Document
    sql: No (FQL/GraphQL)
    scale_to_zero: Yes
    global: Yes
    connections: HTTP API
    pricing: Per operation
    best_for: Global apps, GraphQL

Decision Framework

decision_tree:
  need_sql:
    yes:
      need_postgresql: Aurora Serverless + RDS Proxy
      need_mysql:
        schema_migrations_important: PlanetScale
        variable_load: Aurora Serverless
        simple_needs: PlanetScale
    no:
      known_access_patterns:
        high_scale: DynamoDB
        global_consistency: Fauna
      flexible_queries: Fauna

  cost_sensitive:
    development: DynamoDB on-demand, PlanetScale free tier
    production: DynamoDB provisioned, Aurora (at scale)
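
For illustration, the tree above can be sketched as a function. The branches mirror the tree but are simplified (the MySQL branch collapses to PlanetScale here), and a real decision has more dimensions, like team expertise and existing infrastructure:

```javascript
// Illustrative encoding of the decision tree above. Not exhaustive.
function pickDatabase({ needsSql, engine, knownAccessPatterns, globalConsistency }) {
  if (needsSql) {
    if (engine === 'postgresql') return 'Aurora Serverless + RDS Proxy';
    // MySQL: PlanetScale for schema evolution; Aurora also fits variable load
    return 'PlanetScale';
  }
  if (knownAccessPatterns && !globalConsistency) return 'DynamoDB';
  return 'Fauna';
}

console.log(pickDatabase({ needsSql: true, engine: 'postgresql' }));
// "Aurora Serverless + RDS Proxy"
console.log(pickDatabase({ needsSql: false, knownAccessPatterns: true }));
// "DynamoDB"
```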

Key Takeaways

The right choice depends on your access patterns, SQL requirements, and operational preferences: DynamoDB when access patterns are known and scale is high, Aurora Serverless (behind RDS Proxy) for relational workloads, PlanetScale for MySQL with painless schema changes, and Fauna for globally consistent, HTTP-native access. There's no universal answer.