Go Backend Simplest Redis Caching
Hey there, fellow Gophers! If you’ve ever built a Go backend—maybe a REST API with Gin or Echo—and noticed it slowing down under load because of repeated database queries, you’re not alone. Fetching the same data over and over from PostgreSQL or MySQL can become a bottleneck fast.
Enter Redis: an in-memory key-value store that’s blazingly fast and perfect for caching. In this post, we’ll cover the simplest way to integrate Redis caching into your Go backend to dramatically boost performance. By the end, you’ll have a working example that reduces database hits and speeds up response times.
We’ll keep it practical: no overcomplicated patterns, just straightforward caching for expensive operations.
Why Redis for Caching in Go?
- Speed: Redis lives in memory—operations are sub-millisecond
- Simplicity: Go has excellent Redis clients like go-redis/redis
- Features: Supports expiration (TTL), which prevents stale data
- Real Impact: In a typical benchmark with 1000 requests, caching can reduce average response time from 205ms to 18ms—over 10x improvement
Compared to in-process caches (like sync.Map), Redis shines when you scale horizontally (multiple instances) or need persistence.
Prerequisites
- Go 1.22+ installed
- A running Redis instance (local or Docker)
- Basic knowledge of Go and HTTP handlers
Quickly spin up Redis with Docker:
```shell
docker run -d --name redis-cache -p 6379:6379 redis:7
```

Step 1: Project Setup
Create a new project:
```shell
mkdir go-redis-cache && cd go-redis-cache
go mod init github.com/yourname/go-redis-cache
go get github.com/gin-gonic/gin
go get github.com/redis/go-redis/v9
```

We'll use Gin for the web framework; it's lightweight and popular.
Step 2: Connect to Redis
Create redis.go:
```go
package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

var rdb *redis.Client

func InitRedis() {
	rdb = redis.NewClient(&redis.Options{
		Addr:     "localhost:6379", // Change if using remote Redis
		Password: "",               // No password for local dev
		DB:       0,
	})

	// Test connection
	ctx := context.Background()
	_, err := rdb.Ping(ctx).Result()
	if err != nil {
		log.Fatalf("Could not connect to Redis: %v", err)
	}
	log.Println("Connected to Redis!")
}
```

Security note: Never hardcode credentials in production; use environment variables instead.
Step 3: The Cache-Aside Pattern (The Simplest Way)
The most straightforward caching strategy is cache-aside:
- Check Redis for data
- On hit → return cached value
- On miss → query database → write to Redis with TTL → return data
Let’s build a simple user endpoint that fetches from a mock DB.
First, a mock database (in real life, replace with GORM/SQL):
```go
package main

import (
	"errors"
	"time"
)

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

var users = map[int]User{
	1: {ID: 1, Name: "Paul"},
	2: {ID: 2, Name: "Ada"},
}

// GetUserFromDB simulates a slow database query
func GetUserFromDB(id int) (User, error) {
	// Simulate database latency
	time.Sleep(200 * time.Millisecond)

	user, exists := users[id]
	if !exists {
		return User{}, errors.New("user not found")
	}
	return user, nil
}
```

Now, the handler with caching (main.go):
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/redis/go-redis/v9"
)

func getUser(c *gin.Context) {
	idStr := c.Param("id")
	id, err := strconv.Atoi(idStr)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid id"})
		return
	}

	ctx := c.Request.Context()
	key := "user:" + idStr

	// Step 1: Check cache
	cached, err := rdb.Get(ctx, key).Result()
	if err == nil {
		// Cache hit!
		var user User
		if err := json.Unmarshal([]byte(cached), &user); err != nil {
			// If unmarshal fails, log and fall through to the DB query
			log.Printf("Cache unmarshal error: %v", err)
		} else {
			c.JSON(http.StatusOK, user)
			return
		}
	} else if err != redis.Nil {
		// Redis error (not just a miss) - log but continue to the DB
		log.Printf("Redis error: %v", err)
	}

	// Step 2: Cache miss → query DB
	user, err := GetUserFromDB(id)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "user not found"})
		return
	}

	// Step 3: Serialize and cache with a 5-minute TTL.
	// Five minutes is reasonable for user data that doesn't change often;
	// use shorter TTLs for frequently updated data.
	userJSON, err := json.Marshal(user)
	if err != nil {
		log.Printf("Marshal error: %v", err)
	} else if err := rdb.Set(ctx, key, userJSON, 5*time.Minute).Err(); err != nil {
		// Caching is best effort - we still return the data below
		log.Printf("Cache set error: %v", err)
	}

	c.JSON(http.StatusOK, user)
}

func main() {
	InitRedis()

	r := gin.Default()
	r.GET("/user/:id", getUser)

	log.Println("Server starting on :8080")
	r.Run(":8080") // http://localhost:8080/user/1
}
```

That's it! Run with go run . and hit the endpoint a few times: the first request hits the "DB" (with the 200ms delay), and subsequent ones return instantly from Redis.
Testing the Performance Difference
Benchmark without cache (comment out the Redis check):
```shell
# Install hey: go install github.com/rakyll/hey@latest
hey -n 1000 -c 10 http://localhost:8080/user/1
# Results: ~45 requests/sec, ~220ms average response time
```

Benchmark with cache enabled:

```shell
hey -n 1000 -c 10 http://localhost:8080/user/1
# Results: ~450 requests/sec, ~20ms average response time
# That's a 10x improvement!
```

Bonus: Cache Invalidation
To avoid stale data, invalidate the cache when you update a user:
```go
func updateUser(c *gin.Context) {
	idStr := c.Param("id")
	// ... update database ...

	// Invalidate cache
	ctx := c.Request.Context()
	if err := rdb.Del(ctx, "user:"+idStr).Err(); err != nil {
		log.Printf("Cache invalidation error: %v", err)
	}

	c.JSON(http.StatusOK, gin.H{"message": "user updated"})
}
```

Alternatively, use shorter TTLs for data that changes frequently.
Performance Tips for Production
Connection pooling: go-redis handles connection pooling automatically. For high-concurrency scenarios, tune the pool size with the PoolSize option (default is 10 connections per CPU).
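For example, pool settings are passed in redis.Options when creating the client (a configuration sketch; the numbers are placeholders to tune against your own workload, not recommendations):

```go
rdb = redis.NewClient(&redis.Options{
	Addr:         "localhost:6379",
	PoolSize:     100,             // max open connections (default: 10 per CPU)
	MinIdleConns: 10,              // keep some warm connections ready
	PoolTimeout:  4 * time.Second, // max wait for a free connection
})
```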
Monitoring: Use redis-cli MONITOR during development or tools like RedisInsight for production monitoring.
Serialization: We use JSON here for simplicity, but for high-throughput APIs, consider msgpack or protobuf for faster serialization.
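If you do swap codecs, the change stays localized to the marshal/unmarshal calls. A sketch using github.com/vmihailenco/msgpack/v5 (one common msgpack client for Go; assuming you've added the module with go get):

```go
// import "github.com/vmihailenco/msgpack/v5"

// Instead of json.Marshal / json.Unmarshal in getUser:
data, err := msgpack.Marshal(user)
// ...
err = msgpack.Unmarshal(data, &user)
```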
High availability: In production, use Redis Sentinel or Redis Cluster for failover and distributed caching.
Security: Enable TLS, use authentication, and never expose Redis directly to the internet.
Conclusion
Adding Redis caching to your Go backend is straightforward with go-redis and can deliver massive performance improvements with minimal code changes. Start with the cache-aside pattern for read-heavy endpoints, benchmark the difference using tools like hey or ab, and iterate from there.
The beauty of this approach is its simplicity—you’re just adding a layer before your existing database calls. No complex cache warming strategies or distributed locking needed to get started.
Ready to see the difference? Clone the code above, run both cached and uncached versions, and benchmark them yourself. The performance gains speak for themselves.
Links & Resources
- Redis Caching in Node.js - Same pattern for Node.js developers
- Read on DEV.to - Original article and discussion
- GitHub Gist - Code examples and snippets
What caching challenges have you faced in Go? Have you tried other patterns like write-through or read-through caching? I’d love to hear your experiences!
Happy caching!