
Creating and Using Unlogged Tables in PostgreSQL

Hey there, fellow developers! Ever found yourself reaching for Redis when all you needed was a quick place to store some temporary data? I’ve been there too. While Redis is awesome (and I use it all the time), sometimes it feels like bringing a swiss army knife when all you need is a spoon. If you’re already running PostgreSQL in your stack, I’ve got a neat trick that might save you some infrastructure headaches: unlogged tables! Let’s dive into how you can use them with Go as a simpler alternative for certain use cases.

🔍 What Are Unlogged Tables?

Think of unlogged tables as PostgreSQL’s way of saying, “I’ll trade some safety for speed.” Unlike their more cautious cousins (regular tables), unlogged tables skip the whole Write-Ahead Logging thing. Here’s what that means for you:

  • Faster writes: Your data skips the write-ahead log (WAL) entirely, so there's less I/O per write. Hello, performance boost!
  • No crash safety: If your database decides to take an unexpected nap (crashes), PostgreSQL truncates the table during recovery and that data is gone. Like tears in rain. (A clean shutdown and restart keeps it, though.)
  • No replication: These tables won’t show up in your streaming replication. They’re lone wolves.

Perfect for those times when you need quick storage for stuff you can either rebuild or wouldn’t cry over if it vanished—session data, caches, those in-between results during big data crunching.
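And here’s a bonus: if you’ve already got a regular table, you don’t have to drop and recreate it. PostgreSQL (9.5 and newer) can flip a table between the two modes — fair warning, though, both directions rewrite the whole table, so it’s not free on big ones (table name here is just a placeholder):

```sql
-- Drop WAL protection for an existing table...
ALTER TABLE my_table SET UNLOGGED;

-- ...and bring it back when you want durability again.
ALTER TABLE my_table SET LOGGED;
```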

⚔️ Redis vs. Unlogged Tables: The Showdown

Redis’s Superpowers

  • In-memory magic: Lightning-fast responses that make everything else seem sluggish.
  • Swiss army knife of data structures: Lists, sets, sorted sets, pub/sub—Redis has it all.
  • Flexible persistence: Want to save your data? Configure it how you like.

When Unlogged Tables Shine

  • One less thing to manage: Already got PostgreSQL? You’re good to go.
  • SQL goodness: Join tables, run complex queries, and keep that ACID compliance where you need it.
  • Handles the chonky data: Got more data than memory? No problem, these tables live on disk.

🤔 When Should You Reach for Unlogged Tables?

  • You’re trying to keep your infrastructure simple (or your ops team is giving you that look).
  • Your data makes more sense in rows and columns, and you want those sweet, sweet SQL queries.
  • A little data loss during crashes wouldn’t end the world—or your career.

🛠️ Tutorial: Building a Session Cache with Unlogged Tables in Go

Step 1: Create Your Speed Demon Table

First, let’s tell PostgreSQL to set up our unlogged table:

CREATE UNLOGGED TABLE sessions (
    id TEXT PRIMARY KEY,
    user_id INT NOT NULL,
    data JSONB,
    expires_at TIMESTAMPTZ NOT NULL
);
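Since the cleanup step later on deletes by expires_at, it’s probably worth an index there too (a suggestion on my part, not a requirement — and note that indexes on unlogged tables are themselves unlogged):

```sql
CREATE INDEX sessions_expires_at_idx ON sessions (expires_at);
```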

Step 2: Connect Your Go App to PostgreSQL

Let’s wire things up with the pgx driver (my personal favorite):

package main

import (
	"context"
	"fmt"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	connStr := "postgres://user:password@localhost:5432/dbname"
	pool, err := pgxpool.New(context.Background(), connStr)
	if err != nil {
		panic(err)
	}
	defer pool.Close()
	
	// Quick sanity check
	err = pool.Ping(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("Connected to PostgreSQL! We're in business!")
}

Step 3: Save and Retrieve Your Sessions

Drop in a new session:

func createSession(pool *pgxpool.Pool, sessionID string, userID int, data map[string]interface{}, expiresAt time.Time) error {
	// pgx encodes the Go map straight into the JSONB column.
	// (You'll need "time" in your imports for time.Time.)
	query := `
		INSERT INTO sessions (id, user_id, data, expires_at)
		VALUES ($1, $2, $3, $4)`
	_, err := pool.Exec(context.Background(), query, sessionID, userID, data, expiresAt)
	return err
}
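One wrinkle: if the same session ID might be written twice (say, to refresh an existing session), a plain INSERT will error on the primary key. If that’s your situation, you could swap the query for an upsert — something like:

```sql
INSERT INTO sessions (id, user_id, data, expires_at)
VALUES ($1, $2, $3, $4)
ON CONFLICT (id) DO UPDATE
SET data = EXCLUDED.data,
    expires_at = EXCLUDED.expires_at;
```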

Grab that session when you need it:

func getSession(pool *pgxpool.Pool, sessionID string) (userID int, data map[string]interface{}, err error) {
	query := `
		SELECT user_id, data
		FROM sessions
		WHERE id = $1 AND expires_at > NOW()`
	// Scan decodes the JSONB column back into the map for us.
	err = pool.QueryRow(context.Background(), query, sessionID).Scan(&userID, &data)
	return
}
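One thing these snippets gloss over is where sessionID comes from in the first place. There’s no one right answer, but a common approach (my assumption here, not a requirement) is to hex-encode some cryptographically random bytes:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newSessionID returns a 32-character hex ID built from 16
// cryptographically random bytes -- plenty of entropy for a session key.
func newSessionID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newSessionID()
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```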

Step 4: Keep Things Tidy

Let’s add a little janitor function to sweep away expired sessions:

func cleanupSessions(pool *pgxpool.Pool) {
	query := "DELETE FROM sessions WHERE expires_at <= NOW()"
	_, err := pool.Exec(context.Background(), query)
	if err != nil {
		fmt.Printf("Oops, cleanup failed: %v\n", err)
	}
}

// Run this fella periodically with a time.Ticker.

🚀 The Performance Reality Check

So how fast are these unlogged tables? In my testing, they’re about 2-3 times speedier on writes compared to regular tables. That’s nothing to sneeze at! But let’s be real—they’re still writing to disk, so Redis (with its in-memory superpowers) will smoke them for heavy-duty read/write workloads.

If you’re fine with a few milliseconds of latency—like for user sessions where humans won’t notice the difference—unlogged tables could be your new best friend.

⚠️ Watch Out For These Gotchas

  • Expect amnesia: After a crash, PostgreSQL truncates unlogged tables during recovery. Plan accordingly!
  • Vacuum still runs: PostgreSQL still needs to clean up, so configure your autovacuum settings if you’re churning through lots of data.
  • Keep an eye on growth: Monitor your table size with pg_stat_user_tables so things don’t get out of hand.
  • Index wisely: Indexes will speed up your reads but slow down writes. Choose your battles.
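For those vacuum and monitoring bullets, here’s roughly what that looks like in practice. The scale factor below is just an illustrative starting point — tune it to your actual churn:

```sql
-- Vacuum the sessions table more aggressively than the global default.
ALTER TABLE sessions SET (autovacuum_vacuum_scale_factor = 0.05);

-- Peek at live vs. dead rows to see how much churn you're dealing with.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'sessions';
```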

🎯 Wrapping It Up

Look, unlogged tables aren’t going to replace Redis for everything—and they shouldn’t! But they’re a neat trick to have up your sleeve, especially when you’re already knee-deep in PostgreSQL and want to keep things simple.

The Go implementation is super straightforward with pgx, and you can slot it right into your existing systems without much fuss. Before you jump in, think about how much data loss you can stomach and what your load patterns look like. If you absolutely need Redis-like speed, stick with Redis—but if simplicity is your jam and PostgreSQL is already part of your stack, give unlogged tables a shot.

Pro tip from a battle-scarred developer: Why not use both? Redis for the stuff that needs lightning-fast responses, and unlogged tables for the bigger chunks of data that play well with SQL. Best of both worlds!
