06.02.2026 • 13 min read

Redis Masterclass | In-Memory Database Patterns for High Performance


Introduction 🎯

Redis isn’t just a cache. It’s an in-memory data structure server that powers real-time applications, handles millions of requests per second, and scales from hobby projects to billion-dollar companies.

This masterclass takes you from “why do I need Redis?” to production patterns that reduce database load, enable real-time features, and make your applications faster and more reliable.

We’ll use Express + TypeScript + ioredis (a mature, widely used Redis client for Node.js) throughout. All code is production-tested. Let’s build something fast. 🚀


Part 1: Redis Fundamentals

1.1 What is Redis?

Redis (Remote Dictionary Server) is:

  • In-memory: Data lives in RAM, making reads and writes extremely fast (~1-100 microseconds)
  • Data structures: Strings, lists, sets, sorted sets, hashes, streams, etc.
  • Persistent: Can save to disk (RDB snapshots or AOF logs)
  • Mostly single-threaded: Commands execute one at a time, so each command is atomic (Redis 6+ uses extra threads only for I/O)
  • Networked: Works over TCP from any language/framework
Redis speed:
├─ Memory access: ~100 nanoseconds
├─ Redis local: ~1-10 microseconds
├─ Redis over network: ~100-1000 microseconds
├─ Database query: ~1-100 milliseconds
└─ Disk I/O: ~10 milliseconds

Result: a Redis read is typically 10-1,000x faster than a database query

1.2 Data Types You’ll Use Most

import Redis from "ioredis";

const redis = new Redis({
  host: "localhost",
  port: 6379,
});

// 📝 STRING: Simple key-value, cache, counters
await redis.set("user:1:name", "Alice");
const name = await redis.get("user:1:name");  // "Alice"

// Atomic counter
await redis.incr("page:views");  // 1, 2, 3, ...
await redis.incrby("page:views", 10);  // Add 10

// Expiration (cache invalidation)
await redis.set("session:abc123", "user_data", "EX", 3600);  // Expires in 1 hour

// 📋 LIST: Ordered collection, job queues, timelines
await redis.rpush("notifications", "msg1", "msg2", "msg3");  // Push right
const first = await redis.lpop("notifications");  // Pop left (FIFO)

// 🏷️ SET: Unique items, followers, tags
await redis.sadd("user:1:followers", 2, 3, 4);
await redis.smembers("user:1:followers");  // [2, 3, 4]
await redis.sismember("user:1:followers", 2);  // 1 (ioredis returns 1/0, not a boolean)

// 🔢 SORTED SET: Ranking, leaderboards, time-series
await redis.zadd("leaderboard", 100, "alice", 95, "bob", 88, "charlie");
await redis.zrevrange("leaderboard", 0, 2);  // Top 3: [alice, bob, charlie]
await redis.zrevrank("leaderboard", "bob");  // 1 (0-based rank, highest score first)

// 🔗 HASH: Objects, user profiles, settings
await redis.hset("user:1", "email", "alice@example.com", "name", "Alice");
const user = await redis.hgetall("user:1");  // { email: "...", name: "Alice" }

// 🌊 STREAM: Event log, time-series, pub/sub replacement
await redis.xadd("events", "*", "event", "login", "userId", "1");
const events = await redis.xread("COUNT", 10, "STREAMS", "events", "0");

1.3 Connection Management

import Redis from "ioredis";

// Basic connection
const redis = new Redis({
  host: process.env.REDIS_HOST || "localhost",
  port: parseInt(process.env.REDIS_PORT || "6379"),
  password: process.env.REDIS_PASSWORD,
  db: 0,  // Database number (0-15)
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
});

// Connection events
redis.on("connect", () => {
  console.log("✅ Redis connected");
});

redis.on("error", (err) => {
  console.error("❌ Redis error:", err);
});

redis.on("close", () => {
  console.log("Redis connection closed");
});

// Graceful shutdown
process.on("SIGINT", async () => {
  await redis.quit();  // Close connection cleanly
  process.exit(0);
});

Part 2: Caching Patterns

2.1 Cache-Aside (Lazy Loading)

Check cache first; if miss, load from database and cache:

import express, { Request, Response } from "express";

interface User {
  id: number;
  email: string;
  name: string;
}

async function getUserWithCache(userId: number): Promise<User> {
  const cacheKey = `user:${userId}`;

  // 1️⃣ Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log(`✅ Cache hit for ${cacheKey}`);
    return JSON.parse(cached);
  }

  // 2️⃣ Cache miss—load from database
  console.log(`❌ Cache miss for ${cacheKey}—querying database`);
  const user = await db.users.findOne({ id: userId });

  if (!user) {
    throw new Error("User not found");
  }

  // 3️⃣ Store in cache (1 hour TTL)
  await redis.setex(cacheKey, 3600, JSON.stringify(user));

  return user;
}

const app = express();

app.get<{ id: string }>(
  "/api/users/:id",
  async (req: Request<{ id: string }>, res: Response) => {
    try {
      const user = await getUserWithCache(parseInt(req.params.id));
      res.json(user);
    } catch (err) {
      res.status(404).json({ error: "User not found" });
    }
  }
);

When to use: General caching, user profiles, products. Simple and effective.

2.2 Cache-Invalidation on Write

When data changes, invalidate the cache:

async function updateUser(userId: number, updates: Partial<User>): Promise<User> {
  // 1️⃣ Update database
  const user = await db.users.update({ id: userId }, updates);

  // 2️⃣ Invalidate cache
  await redis.del(`user:${userId}`);

  // Optional: Broadcast invalidation to other servers
  await redis.publish("cache-invalidation", JSON.stringify({ type: "user", id: userId }));

  return user;
}

const app = express();

app.patch<{ id: string }>(
  "/api/users/:id",
  async (req: Request<{ id: string }>, res: Response) => {
    const user = await updateUser(parseInt(req.params.id), req.body);
    res.json(user);
  }
);

Critical rule: Every write that affects cached data must invalidate the cache.
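Real apps usually cache several views of the same entity. A hedged sketch (the key names here are illustrative, not from the code above) that collects every affected key and deletes them in a single DEL call, one round trip:

```typescript
// Any client exposing del() works here; ioredis matches this shape.
interface DelClient {
  del(...keys: string[]): Promise<number>;
}

// All cache keys a user write can affect. List whatever your app
// actually caches — these three are examples.
function userCacheKeys(userId: number): string[] {
  return [`user:${userId}`, `user:${userId}:stats`, `user:${userId}:posts`];
}

// One DEL with multiple keys = one round trip to Redis.
async function invalidateUserCache(client: DelClient, userId: number): Promise<number> {
  return client.del(...userCacheKeys(userId));
}
```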

2.3 Write-Through Caching

Update database AND cache atomically:

async function createPost(userId: number, title: string, content: string) {
  // 1️⃣ Write to database
  const post = await db.posts.create({ userId, title, content });

  // 2️⃣ Add to Redis
  const userPostsKey = `user:${userId}:posts`;
  await redis.lpush(userPostsKey, JSON.stringify(post));
  await redis.ltrim(userPostsKey, 0, 49);  // Keep last 50

  // 3️⃣ Update user's post count
  const userStatsKey = `user:${userId}:stats`;
  await redis.hincrby(userStatsKey, "postCount", 1);
  await redis.expire(userStatsKey, 86400);  // 24 hours

  return post;
}

app.post<never, any, { title: string; content: string }>(
  "/api/posts",
  authenticateUser,
  async (req: Request<never, any, { title: string; content: string }>, res: Response) => {
    const post = await createPost(req.user!.id, req.body.title, req.body.content);
    res.status(201).json(post);
  }
);

When to use: Real-time counters, activity feeds, leaderboards. Consistency is critical.

2.4 TTL Jitter (Thundering Herd Protection)

Avoid a thundering herd, where many keys expire at once and every request falls through to the database, by randomizing TTLs:

async function smartCacheGet(key: string, ttl: number, loader: () => Promise<any>) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const data = await loader();

  // Add random jitter to prevent all expiring together
  const jitter = Math.random() * 0.1 * ttl;  // 0-10% extra TTL, so keys don't all expire together
  const actualTtl = ttl + jitter;

  await redis.setex(key, Math.floor(actualTtl), JSON.stringify(data));
  return data;
}

// Usage
const user = await smartCacheGet(
  `user:${userId}`,
  3600,
  () => db.users.findOne({ id: userId })
);
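Jitter spreads expirations across many keys. For a single hot key you can additionally let only one caller rebuild the value while everyone else briefly waits and re-reads the cache. A sketch, with the client injected (the interface mirrors the ioredis calls used; the 10 s lock TTL and 100 ms wait are illustrative):

```typescript
// Minimal client surface; ioredis matches these signatures.
interface LockClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ex: "EX", ttl: number, nx: "NX"): Promise<"OK" | null>;
  setex(key: string, ttl: number, value: string): Promise<unknown>;
  del(key: string): Promise<number>;
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Only one caller wins the short-lived rebuild lock and refreshes the
// key; the rest wait briefly and re-check the cache.
async function getWithStampedeLock<T>(
  client: LockClient,
  key: string,
  ttl: number,
  loader: () => Promise<T>,
  attempts = 3
): Promise<T> {
  const cached = await client.get(key);
  if (cached) return JSON.parse(cached) as T;

  const acquired = await client.set(`${key}:rebuild`, "1", "EX", 10, "NX");
  if (acquired === "OK") {
    try {
      const data = await loader();
      await client.setex(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await client.del(`${key}:rebuild`);
    }
  }

  if (attempts <= 0) return loader();  // give up waiting; query the source directly
  await sleep(100);
  return getWithStampedeLock(client, key, ttl, loader, attempts - 1);
}
```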

Part 3: Session Management

3.1 Session Storage with Express

import RedisStore from "connect-redis";
import session from "express-session";

const redisClient = new Redis();
const redisStore = new RedisStore({ client: redisClient });

app.use(
  session({
    store: redisStore,
    secret: process.env.SESSION_SECRET || "default-secret",
    resave: false,
    saveUninitialized: false,
    cookie: {
      secure: process.env.NODE_ENV === "production",  // HTTPS only
      httpOnly: true,  // JS can't access
      sameSite: "strict",  // CSRF protection
      maxAge: 1000 * 60 * 60 * 24,  // 24 hours
    },
  })
);

// Usage
app.post<never, any, { email: string; password: string }>(
  "/api/login",
  async (req: Request<never, any, { email: string; password: string }>, res: Response) => {
    const user = await authenticateUser(req.body.email, req.body.password);

    if (!user) {
      return res.status(401).json({ error: "Invalid credentials" });
    }

    // Session data is stored in Redis automatically
    req.session.userId = user.id;
    req.session.userEmail = user.email;

    res.json({ message: "Logged in" });
  }
);

app.get<never, any>(
  "/api/me",
  (req: Request, res: Response) => {
    if (!req.session.userId) {
      return res.status(401).json({ error: "Not logged in" });
    }

    res.json({ userId: req.session.userId, email: req.session.userEmail });
  }
);

app.post<never, any>(
  "/api/logout",
  (req: Request, res: Response) => {
    req.session.destroy((err) => {
      if (err) {
        return res.status(500).json({ error: "Logout failed" });
      }
      res.json({ message: "Logged out" });
    });
  }
);

3.2 Token Blacklist (JWT Revocation)

async function revokeToken(token: string, expiresAt: Date) {
  const ttlSeconds = Math.ceil((expiresAt.getTime() - Date.now()) / 1000);

  if (ttlSeconds > 0) {
    await redis.setex(`blacklist:${token}`, ttlSeconds, "true");
  }
}

async function isTokenBlacklisted(token: string): Promise<boolean> {
  const result = await redis.get(`blacklist:${token}`);
  return result !== null;
}

// Middleware
const verifyToken = async (req: Request, res: Response, next: NextFunction) => {
  const token = req.headers.authorization?.split(" ")[1];

  if (!token) {
    return res.status(401).json({ error: "Missing token" });
  }

  const blacklisted = await isTokenBlacklisted(token);
  if (blacklisted) {
    return res.status(403).json({ error: "Token revoked" });
  }

  try {
    const decoded = verifyJWT(token);
    req.userId = decoded.id;
    next();
  } catch (err) {
    res.status(403).json({ error: "Invalid token" });
  }
};

// Logout endpoint
app.post<never, any>(
  "/api/logout",
  verifyToken,
  async (req: Request, res: Response) => {
    const token = req.headers.authorization!.split(" ")[1];
    const decoded = verifyJWT(token);

    await revokeToken(token, new Date(decoded.exp * 1000));

    res.json({ message: "Logged out" });
  }
);
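Storing the raw JWT as a key works, but tokens are long and sensitive. A hedged variant (the helper names are my own) that keys the blacklist by a SHA-256 digest instead, so keys stay short and a leaked Redis dump doesn't expose usable tokens:

```typescript
import { createHash } from "crypto";

// Derive a fixed-length, non-reversible key from the token.
function blacklistKey(token: string): string {
  const digest = createHash("sha256").update(token).digest("hex");
  return `blacklist:${digest}`;
}

// Same TTL logic as revokeToken above, but keyed by the digest.
async function revokeTokenHashed(
  redis: { setex(key: string, ttl: number, value: string): Promise<unknown> },
  token: string,
  expiresAt: Date
): Promise<void> {
  const ttlSeconds = Math.ceil((expiresAt.getTime() - Date.now()) / 1000);
  if (ttlSeconds > 0) {
    await redis.setex(blacklistKey(token), ttlSeconds, "1");
  }
}
```

The lookup side changes the same way: hash the incoming token before the GET.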

Part 4: Job Queues & Background Processing

4.1 Simple Queue Implementation

interface Job {
  id: string;
  type: "email" | "notification" | "report";
  data: any;
  retries: number;
  maxRetries: number;
}

class JobQueue {
  constructor(private redis: Redis) {}

  // 📤 Enqueue a job
  async enqueue(type: Job["type"], data: any): Promise<string> {
    const jobId = `job:${Date.now()}:${Math.random()}`;

    const job: Job = {
      id: jobId,
      type,
      data,
      retries: 0,
      maxRetries: 3,
    };

    await this.redis.rpush("jobs:pending", JSON.stringify(job));
    return jobId;
  }

  // 🔄 Get next job
  async dequeue(): Promise<Job | null> {
    const jobStr = await this.redis.lpop("jobs:pending");
    if (!jobStr) return null;

    return JSON.parse(jobStr);
  }

  // ✅ Mark job complete
  async complete(jobId: string): Promise<void> {
    await this.redis.hset("jobs:completed", jobId, Date.now());
  }

  // 🔁 Requeue with backoff
  async retry(job: Job): Promise<void> {
    job.retries++;

    if (job.retries >= job.maxRetries) {
      await this.redis.hset("jobs:failed", job.id, JSON.stringify(job));
      return;
    }

    // Exponential backoff: wait 2^retries seconds before requeueing.
    // (A setTimeout keeps this sketch simple; it loses the retry if the
    // worker dies before the timer fires.)
    const delaySeconds = Math.pow(2, job.retries);
    setTimeout(() => {
      this.redis.rpush("jobs:pending", JSON.stringify(job)).catch(console.error);
    }, delaySeconds * 1000);
  }
}

// Worker process
const jobQueue = new JobQueue(redis);

async function startWorker() {
  while (true) {
    const job = await jobQueue.dequeue();

    if (!job) {
      await new Promise((resolve) => setTimeout(resolve, 1000));
      continue;
    }

    try {
      if (job.type === "email") {
        await sendEmail(job.data);
      } else if (job.type === "notification") {
        await sendNotification(job.data);
      } else if (job.type === "report") {
        await generateReport(job.data);
      }

      await jobQueue.complete(job.id);
      console.log(`✅ Job ${job.id} completed`);
    } catch (err) {
      console.error(`❌ Job ${job.id} failed:`, err);
      await jobQueue.retry(job);
    }
  }
}

// Start worker
startWorker().catch(console.error);

// API endpoint to queue jobs
app.post<never, any, { email: string; subject: string }>(
  "/api/send-email",
  authenticateUser,
  async (req: Request<never, any, { email: string; subject: string }>, res: Response) => {
    const jobId = await jobQueue.enqueue("email", {
      to: req.body.email,
      subject: req.body.subject,
    });

    res.json({ jobId, message: "Email queued" });
  }
);
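A more durable way to delay retries is to park jobs in a sorted set scored by their due timestamp, with a promoter loop that moves due jobs back onto the pending list. A hedged sketch (the `jobs:delayed` key is my own; `jobs:pending` follows the queue above):

```typescript
// Minimal client surface; ioredis matches these signatures.
interface ZClient {
  zadd(key: string, score: number, member: string): Promise<unknown>;
  zrangebyscore(key: string, min: number | string, max: number | string): Promise<string[]>;
  zrem(key: string, ...members: string[]): Promise<number>;
  rpush(key: string, ...values: string[]): Promise<number>;
}

// Exponential backoff: 2^retries seconds, capped at 5 minutes.
function backoffMs(retries: number): number {
  return Math.min(Math.pow(2, retries), 300) * 1000;
}

// Park the job in the delayed set, scored by when it becomes due.
async function scheduleRetry(client: ZClient, job: { retries: number }, now = Date.now()) {
  await client.zadd("jobs:delayed", now + backoffMs(job.retries), JSON.stringify(job));
}

// Run periodically (e.g. every second): move every due job back to pending.
async function promoteDueJobs(client: ZClient, now = Date.now()): Promise<number> {
  const due = await client.zrangebyscore("jobs:delayed", 0, now);
  for (const jobStr of due) {
    await client.rpush("jobs:pending", jobStr);
    await client.zrem("jobs:delayed", jobStr);
  }
  return due.length;
}
```

Unlike an in-process timer, the delayed set survives worker restarts.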

4.2 Production: Bull Queue

For production, use Bull—a battle-tested queue library (its successor, BullMQ, is also worth a look):

import Queue from "bull";

const emailQueue = new Queue("emails", {
  redis: {
    host: process.env.REDIS_HOST,
    port: parseInt(process.env.REDIS_PORT || "6379"),
  },
});

// 📤 Add job
app.post<never, any, { email: string }>(
  "/api/send-email",
  async (req: Request<never, any, { email: string }>, res: Response) => {
    const job = await emailQueue.add(
      { to: req.body.email, subject: "Welcome!" },
      { attempts: 3, backoff: { type: "exponential", delay: 2000 } }
    );

    res.json({ jobId: job.id });
  }
);

// 🔄 Process jobs
emailQueue.process(async (job) => {
  await sendEmail(job.data.to, job.data.subject);
});

// 📊 Monitor
emailQueue.on("completed", (job) => {
  console.log(`✅ Job ${job.id} completed`);
});

emailQueue.on("failed", (job, err) => {
  console.error(`❌ Job ${job.id} failed:`, err.message);
});

Part 5: Real-Time Features

5.1 Pub/Sub Pattern

// Subscriber (listener)
const subscriber = new Redis();

subscriber.subscribe("notifications", (err, count) => {
  if (err) console.error("Failed to subscribe:", err);
  else console.log(`Subscribed to ${count} channels`);
});

subscriber.on("message", (channel, message) => {
  console.log(`[${channel}] ${message}`);

  // Broadcast to WebSocket clients
  const data = JSON.parse(message);
  broadcastToUsers(data.userIds, data);
});

// Publisher (sender)
async function notifyUser(userId: number, message: string) {
  await redis.publish("notifications", JSON.stringify({ userIds: [userId], message }));
}

// API endpoint
app.post<never, any, { userId: number; message: string }>(
  "/api/notify",
  async (req: Request<never, any, { userId: number; message: string }>, res: Response) => {
    await notifyUser(req.body.userId, req.body.message);
    res.json({ sent: true });
  }
);

5.2 Real-Time Activity Feed with Streams

// Add activity
async function addActivity(userId: number, action: string, targetId: number) {
  const streamKey = `feed:${userId}`;

  await redis.xadd(
    streamKey,
    "*",
    "action",
    action,
    "targetId",
    targetId,
    "timestamp",
    Date.now()
  );

  // Keep only last 1000 entries
  await redis.xtrim(streamKey, "MAXLEN", "~", 1000);
}

// Fetch activity feed
async function getActivityFeed(userId: number, count: number = 50) {
  const streamKey = `feed:${userId}`;

  const entries = await redis.xrevrange(streamKey, "+", "-", "COUNT", count);

  return entries.map(([id, fields]) => ({
    id,
    action: fields[1],               // fields is a flat [field, value, ...] array
    targetId: parseInt(fields[3]),
    timestamp: parseInt(fields[5]),
  }));
}

// API endpoint
app.get<never, any>(
  "/api/feed",
  authenticateUser,
  async (req: Request, res: Response) => {
    const feed = await getActivityFeed(req.userId!, 50);
    res.json(feed);
  }
);
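A single reader works for one process. To fan stream entries out across several workers without double-processing, Redis offers consumer groups (XGROUP / XREADGROUP / XACK). A sketch under assumed names (`feed-workers` group, `feed:events` stream, created beforehand with `XGROUP CREATE feed:events feed-workers $ MKSTREAM`):

```typescript
// ioredis returns XREADGROUP replies as nested arrays:
// [[streamKey, [[entryId, [field1, value1, ...]], ...]], ...]
type StreamReply = [string, [string, string[]][]][];

function parseEntries(reply: StreamReply | null): { id: string; fields: Record<string, string> }[] {
  if (!reply) return [];
  const out: { id: string; fields: Record<string, string> }[] = [];
  for (const [, entries] of reply) {
    for (const [id, flat] of entries) {
      const fields: Record<string, string> = {};
      for (let i = 0; i < flat.length; i += 2) fields[flat[i]] = flat[i + 1];
      out.push({ id, fields });
    }
  }
  return out;
}

// Worker loop body: read up to 10 new entries for this consumer,
// process them, then ACK so the group won't redeliver them.
async function consumeFeed(redis: any, consumer: string) {
  const reply = await redis.xreadgroup(
    "GROUP", "feed-workers", consumer,
    "COUNT", 10, "BLOCK", 5000,
    "STREAMS", "feed:events", ">"
  );
  for (const entry of parseEntries(reply)) {
    // ...process entry.fields here...
    await redis.xack("feed:events", "feed-workers", entry.id);
  }
}
```

Unacknowledged entries stay in the group's pending list, so a crashed worker's entries can be claimed and reprocessed.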

Part 6: Advanced Patterns

6.1 Distributed Locks

async function acquireLock(
  key: string,
  ttlSeconds: number
): Promise<string | null> {
  const lockId = `${Date.now()}:${Math.random()}`;

  // NX: Only set if not exists
  const result = await redis.set(key, lockId, "EX", ttlSeconds, "NX");

  return result ? lockId : null;
}

async function releaseLock(key: string, lockId: string): Promise<boolean> {
  // Lua script: check value before deleting (prevent race condition)
  const script = `
    if redis.call("get", KEYS[1]) == ARGV[1] then
      return redis.call("del", KEYS[1])
    else
      return 0
    end
  `;

  const result = await redis.eval(script, 1, key, lockId);
  return result === 1;
}

// Usage: Prevent concurrent updates
app.patch<{ id: string }>(
  "/api/users/:id",
  async (req: Request<{ id: string }>, res: Response) => {
    const lockKey = `user:${req.params.id}:lock`;
    const lockId = await acquireLock(lockKey, 10);

    if (!lockId) {
      return res.status(409).json({ error: "Resource is being updated" });
    }

    try {
      const user = await db.users.update({ id: parseInt(req.params.id) }, req.body);
      res.json(user);
    } finally {
      await releaseLock(lockKey, lockId);
    }
  }
);

6.2 Rate Limiting

async function rateLimit(userId: number, limit: number, windowSeconds: number): Promise<boolean> {
  const key = `rate-limit:${userId}`;

  const current = await redis.incr(key);

  if (current === 1) {
    await redis.expire(key, windowSeconds);
  }

  return current <= limit;
}

// Middleware
const rateLimitMiddleware = async (req: Request, res: Response, next: NextFunction) => {
  const allowed = await rateLimit(req.userId || 0, 100, 60);  // 100 requests/minute

  if (!allowed) {
    return res.status(429).json({ error: "Rate limit exceeded" });
  }

  next();
};

app.use(rateLimitMiddleware);
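The INCR-based limiter above is a fixed window: a burst straddling the window boundary can briefly see up to double the allowed rate. A sorted-set sliding window smooths this out. A sketch with the client injected (the interface mirrors the ioredis commands used; the `rate:` prefix is illustrative):

```typescript
// Minimal client surface; ioredis matches these signatures.
interface SlidingClient {
  zremrangebyscore(key: string, min: number | string, max: number | string): Promise<number>;
  zadd(key: string, score: number, member: string): Promise<unknown>;
  zcard(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<unknown>;
}

async function slidingWindowLimit(
  client: SlidingClient,
  userId: number,
  limit: number,
  windowSeconds: number,
  now = Date.now()
): Promise<boolean> {
  const key = `rate:${userId}`;
  const windowStart = now - windowSeconds * 1000;

  await client.zremrangebyscore(key, 0, windowStart);      // drop requests outside the window
  await client.zadd(key, now, `${now}:${Math.random()}`);  // record this request
  await client.expire(key, windowSeconds);                 // housekeeping TTL on the whole key
  return (await client.zcard(key)) <= limit;
}
```

In production you'd batch these four commands in a pipeline or Lua script so concurrent requests can't interleave between them.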

6.3 Distributed Counters

// Track page views across multiple servers
async function incrementPageView(pageId: string) {
  const key = `page:${pageId}:views`;

  // Atomic increment
  const count = await redis.incr(key);

  // Batch write to database every 1000 views
  if (count % 1000 === 0) {
    await db.pages.update({ id: pageId }, { views: count });
  }

  return count;
}

// Get current count
async function getPageViews(pageId: string): Promise<number> {
  const cached = await redis.get(`page:${pageId}:views`);
  return cached ? parseInt(cached) : 0;
}

app.get<{ id: string }>(
  "/api/pages/:id",
  async (req: Request<{ id: string }>, res: Response) => {
    const views = await incrementPageView(req.params.id);
    const page = await db.pages.findOne({ id: req.params.id });

    res.json({ ...page, views });
  }
);

Part 7: Monitoring and Debugging

7.1 Redis Metrics

import pino from "pino";

const logger = pino();

// Track Redis performance
class RedisMetrics {
  private metrics = {
    commands: 0,
    errors: 0,
    totalTime: 0,
  };

  track(command: string, durationMs: number, error: boolean) {
    this.metrics.commands++;
    if (error) this.metrics.errors++;
    this.metrics.totalTime += durationMs;

    const avgTime = this.metrics.totalTime / this.metrics.commands;
    const errorRate = (this.metrics.errors / this.metrics.commands) * 100;

    if (this.metrics.commands % 1000 === 0) {
      logger.info({
        redis: {
          commands: this.metrics.commands,
          avgTime: avgTime.toFixed(2) + "ms",
          errorRate: errorRate.toFixed(2) + "%",
        },
      });
    }
  }
}

const metrics = new RedisMetrics();

// Wrap redis commands
const originalCall = redis.call.bind(redis);
(redis as any).call = async function (...args: any[]) {
  const start = Date.now();
  try {
    const result = await originalCall(...args);
    metrics.track(String(args[0]), Date.now() - start, false);
    return result;
  } catch (err) {
    metrics.track(String(args[0]), Date.now() - start, true);
    throw err;
  }
};

7.2 Common Issues and Fixes

 Problem: High memory usage
 Solution:
  1. Check data types (are you storing objects in lists?)
  2. Reduce TTL/expiration times
  3. Configure eviction: set maxmemory and maxmemory-policy (e.g. allkeys-lru)
  4. Implement data archival to database

 Problem: Slow commands
 Solution:
  1. Use SLOWLOG GET to identify slow commands
  2. Avoid KEYS pattern matching (use SCAN instead)
  3. Avoid large value operations
  4. Monitor network latency (redis-benchmark)

 Problem: Data loss after crash
 Solution:
  1. Enable persistence (AOF > RDB)
  2. Use replicas (with Sentinel) or Redis Cluster for redundancy
  3. Back up regularly to S3
  4. Accept that cache is temporary
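On the KEYS vs SCAN point above: KEYS blocks the whole server while it walks every key, while SCAN does the same walk in small cursor-driven batches. A sketch of the SCAN loop (the client interface mirrors ioredis's scan reply shape):

```typescript
// Matches ioredis's scan() reply: [nextCursor, keysThisPage].
interface ScanClient {
  scan(cursor: string, ...args: (string | number)[]): Promise<[string, string[]]>;
}

// Iterate keys matching a pattern without blocking the server.
// Iteration ends when the server returns cursor "0" again.
async function scanKeys(redis: ScanClient, pattern: string): Promise<string[]> {
  const found: string[] = [];
  let cursor = "0";
  do {
    const [next, keys] = await redis.scan(cursor, "MATCH", pattern, "COUNT", 100);
    found.push(...keys);
    cursor = next;
  } while (cursor !== "0");
  return found;
}
```

Note SCAN can return a key more than once during concurrent writes; deduplicate if that matters.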

🎯 Redis Best Practices Checklist

✅ Use namespaced keys: `user:123:profile` not just `profile`
✅ Always set TTL on cache keys
✅ Use pipelining for multiple commands
✅ Monitor memory and implement eviction policies
✅ Handle cache misses gracefully
✅ Use transactions (MULTI/EXEC) for atomic operations
✅ Implement proper error handling and retries
✅ Use sorted sets for leaderboards/rankings
✅ Implement distributed locks for critical sections
✅ Monitor slow queries (SLOWLOG)

❌ Don't store sensitive data without encryption
❌ Don't use KEYS in production (O(n) complexity)
❌ Don't rely on Redis without a backup strategy
❌ Don't ignore eviction warnings
❌ Don't assume cache is always available
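Two of the checklist items, pipelining and MULTI/EXEC, combine in one sketch (the key names are illustrative; the chained builder matches ioredis's API):

```typescript
// MULTI/EXEC batches related writes into a single atomic block:
// one round trip, and no other client observes a half-applied update.
async function recordLogin(redis: any, userId: number) {
  return redis
    .multi()
    .incr(`user:${userId}:logins`)
    .hset(`user:${userId}`, "lastLogin", Date.now())
    .expire(`user:${userId}:logins`, 86400)
    .exec();  // resolves to one [err, result] pair per queued command
}
```

Use a plain `.pipeline()` instead of `.multi()` when you only want the round-trip savings and don't need atomicity.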

Conclusion 🚀

Redis transforms slow applications into lightning-fast ones. Start with caching (cache-aside pattern), add sessions for user state, implement queues for background work, and layer on real-time features as you scale.

The patterns in this guide—from simple caching to distributed locks—work at any scale. Master them, and you have tools for building everything from hobby projects to massive distributed systems.

Redis isn’t optional for modern backends. It’s essential. 💨