Running Redis Cluster Locally in Docker for Local Testing
In this scenario, we need a Redis cluster running locally to allow local testing and debugging of an application's Redis client connections. However, we want to do so without setting up a mess of infrastructure in a data center or a cloud configuration. This solution lets us run a Redis cluster in Docker containers on macOS and use it for local development.
Doing local development with many of the backend components running locally is a beautiful thing. It means an engineer has the freedom to create amazing things without having to clear changes with a change management team or with other team members who may be working on the same system. Properly written software shouldn't have negative effects on those backends, but on the slight chance it does, the engineer can rest assured that other engineers won't be affected, because the working environment is isolated.
Docker is a tool that allows you to package up software and all its dependencies, such as libraries and settings, into a single package called a “container.” Think of it as a self-contained, lightweight virtual machine that can run on any computer, whether it’s a laptop, server or even in the cloud. It’s great for running database engines like PostgreSQL and MySQL, or running NoSQL databases like MongoDB, or even running the software being developed in its own container to keep it isolated.
Redis is super-fast software that stores and retrieves data, like a turbocharged version of a regular database. It keeps data in memory for lightning-fast access and is great for applications that need speed and efficiency. In this case, Redis is used as an in-memory data structure store for caching. In the target environment, the Redis configuration is a cluster with sharding: records are indexed by keys, and the key space is distributed over three or more Redis server nodes. This is good for spreading load around when a system has a high volume of transactions. To fully develop for this target environment without affecting it, we need a version of it in the local development environment.
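To make the sharding idea concrete: Redis Cluster maps every key to one of 16384 hash slots using a CRC16 checksum, and each node in the cluster owns a range of those slots. A minimal Python sketch of that mapping (it ignores Redis's {hash tag} rule, which routes on a substring of the key when one is present):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots."""
    return crc16(key.encode()) % 16384
```

Every client and node computes the same slot for the same key, which is how a cluster-aware client knows which node to send each command to.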
Bitnami’s Redis cluster image was chosen to handle this. We won’t go into the installation details here; Bitnami’s documentation covers that procedure for this image.
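For orientation, a Docker compose file for such a cluster might look roughly like the abbreviated sketch below. The node names, host ports (6371–6376) and the empty-password setting are illustrative assumptions; Bitnami's documentation is the authority on the image's configuration.

```yaml
# docker-compose.yml — abbreviated sketch; see Bitnami's docs for the full setup.
services:
  redis-node-0:
    image: bitnami/redis-cluster:latest
    ports:
      - "6371:6371"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_PORT_NUMBER=6371
      - REDIS_CLUSTER_PREFERRED_ENDPOINT_TYPE=hostname
      - REDIS_CLUSTER_ANNOUNCE_HOSTNAME=redis-node-0
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
  # ... redis-node-1 through redis-node-4 follow the same pattern,
  # each with its own unique port and announce hostname ...
  redis-node-5:
    image: bitnami/redis-cluster:latest
    ports:
      - "6376:6376"
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_PORT_NUMBER=6376
      - REDIS_CLUSTER_PREFERRED_ENDPOINT_TYPE=hostname
      - REDIS_CLUSTER_ANNOUNCE_HOSTNAME=redis-node-5
      - REDIS_NODES=redis-node-0 redis-node-1 redis-node-2 redis-node-3 redis-node-4 redis-node-5
      - REDIS_CLUSTER_REPLICAS=1
      - REDIS_CLUSTER_CREATOR=yes
```

Each node gets its own host port mapping so that every node is individually reachable from the host, which matters once the client starts following cluster redirects.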
Since a client running on the host network, external to the Docker network, can't connect to a Docker-internal IP address, we need a way for the client to connect to the Docker host interface on the proper port for the Redis cluster node it needs to reach.
Two things are needed for this.

First, set these environment variables for each cluster node container:

REDIS_PORT_NUMBER=<0000>
REDIS_CLUSTER_PREFERRED_ENDPOINT_TYPE=hostname
REDIS_CLUSTER_ANNOUNCE_HOSTNAME=<unique hostname>

Replace <0000> with a unique port number for each cluster node.

Replace <unique hostname> with the name given to the container node, i.e., “redis-node-0” through “redis-node-5”.

Second, map each of those hostnames to the loopback address in the host machine's /etc/hosts file:
127.0.0.1 redis-node-0
127.0.0.1 redis-node-1
127.0.0.1 redis-node-2
127.0.0.1 redis-node-3
127.0.0.1 redis-node-4
127.0.0.1 redis-node-5
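The entries above can be generated with a short shell loop rather than typed by hand (the node names match the container names used in this setup):

```shell
# Print an /etc/hosts entry for each of the six cluster nodes.
for i in 0 1 2 3 4 5; do
  echo "127.0.0.1 redis-node-$i"
done
# To apply, append the output to /etc/hosts (requires sudo), e.g.:
#   for i in 0 1 2 3 4 5; do echo "127.0.0.1 redis-node-$i"; done | sudo tee -a /etc/hosts
```

With the hostnames resolving and the ports published, you can verify connectivity from the host with a cluster-aware client, for example `redis-cli -c -h redis-node-0 -p 6371 cluster info` (substituting whichever port you assigned to that node).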