Running Redis Cluster Locally in Docker for Local Testing

In this scenario, we need a Redis cluster running locally to allow local testing and debugging of an application's Redis client connections. However, we want to do so without having to stand up a mess of infrastructure in a data center or a cloud environment. This solution sets up a Redis cluster within Docker on macOS and makes it usable for local development.


Doing local development with access to many of the backend components running locally is a beautiful thing. It gives an engineer the freedom to create amazing things without having to clear changes with a change management team or with other team members who may be working on the same system. Properly built software shouldn't have negative effects on those backends, but on the off chance it does, the engineer can rest assured that other engineers won't be affected, because the working environment is isolated.

Docker is a tool that allows you to package up software and all its dependencies, such as libraries and settings, into a single package called a “container.” Think of it as a self-contained, lightweight virtual machine that can run on any computer, whether it’s a laptop, server or even in the cloud. It’s great for running database engines like PostgreSQL and MySQL, or running NoSQL databases like MongoDB, or even running the software being developed in its own container to keep it isolated.

Redis is super-fast software that stores and retrieves data quickly, like a turbocharged version of a regular database. It keeps data in memory for lightning-fast access and is great for applications that need speed and efficiency. In this case, Redis is being used as an in-memory data structure store for caching. On the target environment, the Redis configuration is a sharded cluster: records are indexed by keys, and the key space is distributed over three or more Redis server nodes. This is good for spreading load around when a system has a high volume of transactions. To fully develop for this target environment without affecting it, we need a comparable setup in the local development environment.
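To make the key-space distribution concrete: Redis Cluster maps every key to one of 16,384 hash slots using a CRC-16 checksum (the XMODEM variant, polynomial 0x1021), and each node owns a range of slots. A minimal sketch in Go; the key name is just an example, and this skips the "hash tag" rule real clients also apply:

```go
package main

import "fmt"

// crc16 implements CRC-16/XMODEM (polynomial 0x1021), the checksum
// Redis Cluster uses for key-to-slot mapping.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

// hashSlot returns the cluster slot (0-16383) that owns a key.
func hashSlot(key string) uint16 {
	return crc16([]byte(key)) % 16384
}

func main() {
	fmt.Println("slot for \"user:42\":", hashSlot("user:42"))
}
```

Whichever node owns that slot is the one the client must ultimately talk to, which is exactly where the problem below comes from.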

Bitnami’s Redis cluster image was chosen to handle this. We’re not going to get into installation details here, so feel free to read through Bitnami’s documentation on that procedure for this image.

The Problem

The Redis cluster within the isolated Docker network only advertises itself in relation to that internal network. Although you may have exposed the ports to the host network using the -p 6379:6379 argument, each Redis node only announces itself by its internal IP address or its hostname. When a Redis cluster client connects and is redirected to the node that holds the requested data, it’s only given a Docker internal IP address or a hostname known solely to the internal Docker network. As one can imagine, this simply leads to a connection timeout when the client tries to connect to that Redis cluster node and send it a command.
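Concretely, the failure looks roughly like the following from the host: the initial connection through the published port succeeds, but the redirect points into the Docker network. This is an illustrative session, not captured output; the internal address shown is a hypothetical Docker-network IP:

```
$ redis-cli -c -p 6379
127.0.0.1:6379> get mykey
-> Redirected to slot [14687] located at
Could not connect to Redis at Operation timed out
```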

The Solution

Since the client that is running on the host network, external to the Docker network, is unable to connect to a Docker internal IP address, we need a way to get the client to connect to the Docker host interface and proper port for the Redis cluster node to which it needs access.

Two things are needed for this:

  1. Each Redis cluster node needs to be on a unique port, and that same port needs to be exposed to the Docker host network.
    • For each node definition, add the following environment variables:

Replace <0000> with a unique port number for each cluster node.

Replace <unique hostname> with the name given to the container node, i.e., “redis-node-0” through “redis-node-5”.
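Since the original variable list isn’t reproduced above, here is roughly what one node definition could look like in a docker-compose file. The service name, the port 6370, and the empty password are placeholders of my own, and the REDIS_* announce variables are based on my reading of the bitnami/redis-cluster image’s settings — verify the exact names against Bitnami’s documentation for your image version:

```yaml
# Hypothetical docker-compose fragment for one node; repeat for
# redis-node-1 through redis-node-5 with a unique port and hostname each.
redis-node-0:
  image: bitnami/redis-cluster:7.0
  ports:
    - "6370:6370"                                    # expose the node's unique port, matching <0000>
  environment:
    - REDIS_PORT_NUMBER=6370                         # <0000>: unique port per node
    - REDIS_CLUSTER_ANNOUNCE_PORT=6370               # announce the host-reachable port
    - REDIS_CLUSTER_ANNOUNCE_HOSTNAME=redis-node-0   # <unique hostname>: the container node's name
    - REDIS_CLUSTER_PREFERRED_ENDPOINT_TYPE=hostname # announce hostnames, not internal IPs
    - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
    - ALLOW_EMPTY_PASSWORD=yes                       # acceptable for local testing only
```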

  2. The host network needs to provide name resolution for each Redis cluster node that points to the Docker host interface’s IP address, which is most likely the loopback interface, or
    • In the macOS world, we can statically manage some DNS entries by updating the /etc/hosts file. Simply add an entry for each Redis cluster node that points to the Docker host interface IP address; in this case, that is We’ll add the following entries for our six-node cluster:
	redis-node-0	redis-node-1	redis-node-2	redis-node-3	redis-node-4	redis-node-5
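Rather than hand-editing, the entries can also be generated. A small shell sketch that prints the six lines; pipe the output through sudo tee -a /etc/hosts yourself rather than letting a script modify the file directly:

```shell
#!/bin/sh
# Print /etc/hosts entries mapping each cluster node name to the
# loopback interface, e.g.:  ./gen-hosts.sh | sudo tee -a /etc/hosts
for i in 0 1 2 3 4 5; do
    printf "\tredis-node-%d\n" "$i"
done
```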
When the client connects to any one of the Redis cluster node ports to access the Redis services, it will be redirected to the appropriate Redis cluster node for that operation, and given a hostname and unique port to which it should connect. With the static host entries in place, that hostname will resolve to, and each Redis cluster node has its unique port available on that interface. So the client will be able to connect successfully.

Environment Resources Used

  • macOS (M1)
  • Colima container runtime version HEAD-cf522e8 
  • Docker version 23.0.1 
  • Redis Cluster – bitnami/redis-cluster:7.0 docker image 
  • Client application written in Go version 1.20.1 
  • Go redis package redis/go-redis/v9 
