# Add Atomic Slot Migration (ASM) support (#14414)
## <a name="overview"></a> Overview 
This PR is a joint effort with @ShooterIT . I’m just opening it on
behalf of both of us.

This PR introduces Atomic Slot Migration (ASM) for Redis Cluster — a new
mechanism for safely and efficiently migrating hash slots between nodes.

Redis Cluster distributes data across nodes using 16384 hash slots, each
owned by a specific node. Sometimes slots need to be moved — for
example, to rebalance after adding or removing nodes, or to mitigate a
hot shard that’s overloaded. Before ASM, slot migration was non-atomic
and client-dependent, relying on the CLUSTER SETSLOT, GETKEYSINSLOT, and
MIGRATE commands, plus client-side handling of ASK/ASKING replies. This
process was complex, error-prone, and slow, and could leave clusters in
inconsistent states after failures. Clients had to implement redirect
logic, multi-key commands could fail mid-migration, and errors often
resulted in orphaned keys or required manual cleanup. Several related
discussions can be found in the issue list, for example (the legacy flow
is also sketched below):
https://github.com/redis/redis/issues/14300 ,
https://github.com/redis/redis/issues/4937 ,
https://github.com/redis/redis/issues/10370 ,
https://github.com/redis/redis/issues/4333 ,
https://github.com/redis/redis/issues/13122,
https://github.com/redis/redis/issues/11312
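
For reference, the legacy per-slot flow looked roughly like this (simplified; counts, timeouts, and key names are illustrative):

```
CLUSTER SETSLOT <slot> IMPORTING <source-node-id>         # on the destination node
CLUSTER SETSLOT <slot> MIGRATING <dest-node-id>           # on the source node
CLUSTER GETKEYSINSLOT <slot> 100                          # repeat on the source node...
MIGRATE <dest-host> <dest-port> "" 0 5000 KEYS k1 k2 k3   # ...until the slot is empty
CLUSTER SETSLOT <slot> NODE <dest-node-id>                # finalize on the involved nodes
```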

Atomic Slot Migration (ASM) makes slot rebalancing safe, transparent,
and reliable, addressing many of the limitations of the legacy migration
method. Instead of moving keys one by one, ASM replicates the entire
slot’s data plus live updates to the target node, then performs a single
atomic handoff. Clients keep working without handling ASK/ASKING
replies, multi-key operations remain consistent, failures don’t leave
partial states, and replicas stay in sync. The migration process also
completes significantly faster. Operators gain new commands (CLUSTER
MIGRATION IMPORT, STATUS, CANCEL) for monitoring and control, while
modules can hook into migration events for deeper integration.

### The problems of the legacy method in detail

Operators and developers ran into multiple issues with the legacy
method. Some of them in detail:

1. **Redirects and Client Complexity:** While a slot was being migrated,
some keys were already moved while others were not. Clients had to
handle `-ASK` and `-ASKING` responses, reissuing requests to the target
node. Not all client libraries implemented this correctly, leading to
failed commands or subtle bugs. Even when implemented, it increased
latency and broke naive pipelines.
2. **Multi-Key Operations Became Unreliable:** Commands like `MGET key1
key2` could fail with `TRYAGAIN` if part of the slot was already
migrated. This made application logic unpredictable during resharding.
3. **Risk of failure:** Keys were moved one-by-one (with MIGRATE
command). If the source crashed, or the destination ran out of memory,
the system could be left in an inconsistent state: some keys moved,
others lost, slots partially migrated. Manual intervention was often
needed, sometimes resulting in data loss.
4. **Replica and Failover Issues:** Replicas weren’t aware of migrations
in progress. If a failover occurred mid-migration, manual intervention
was required to clean up or resume the process safely.
5. **Operational Overhead:** Operators had to coordinate multiple
commands (CLUSTER SETSLOT, MIGRATE, GETKEYSINSLOT, etc.) with little
visibility into progress or errors, making rebalancing slow and
error-prone.
6. **Poor performance:** Key-by-key migration was inherently slow and
inefficient for large slot ranges.
7. **Large keys:** Large keys could fail to migrate or cause latency
spikes on the destination node.

### How Atomic Slot Migration Fixes This

Atomic Slot Migration (ASM) eliminates all of these issues:

1. **Client transparency:** Clients no longer need to handle ASK/ASKING;
the migration is fully transparent.
2. **Atomic ownership transfer:** The entire slot’s data (snapshot +
live updates) is replicated and handed off in a single atomic step.
3. **Performance**: ASM completes migrations significantly faster by
streaming slot data in parallel (snapshot + incremental updates) and
eliminating key-by-key operations.
4. **Consistency guarantees:** Multi-key operations and pipelines
continue to work reliably throughout migration.
5. **Resilience:** Failures no longer leave orphaned keys or partial
states; migration tasks can be retried or safely cancelled.
6. **Replica awareness:** Replicas remain consistent during migration,
and failovers will no longer leave partially imported keys.
7. **Operator visibility:** New CLUSTER MIGRATION subcommands (IMPORT,
STATUS, CANCEL) provide clear observability and management for
operators.


### ASM Diagram and Migration Steps

```
      ┌─────────────┐               ┌────────────┐     ┌───────────┐      ┌───────────┐ ┌───────┐        
      │             │               │Destination │     │Destination│      │ Source    │ │Source │        
      │  Operator   │               │   master   │     │ replica   │      │ master    │ │ Fork  │        
      │             │               │            │     │           │      │           │ │       │        
      └──────┬──────┘               └─────┬──────┘     └─────┬─────┘      └─────┬─────┘ └───┬───┘        
             │                            │                  │                  │           │            
             │                            │                  │                  │           │            
             │CLUSTER MIGRATION IMPORT    │                  │                  │           │            
             │   <start-slot> <end-slot>..│                  │                  │           │            
             ├───────────────────────────►│                  │                  │           │            
             │                            │                  │                  │           │            
             │   Reply with <task-id>     │                  │                  │           │            
             │◄───────────────────────────┤                  │                  │           │            
             │                            │                  │                  │           │            
             │                            │                  │                  │           │            
             │                            │ CLUSTER SYNCSLOTS│SYNC              │           │            
             │ CLUSTER MIGRATION STATUS   │   <task-id> <start-slot> <end-slot>.│           │            
Monitor      │   ID <task-id>             ├────────────────────────────────────►│           │            
task      ┌─►├───────────────────────────►│                  │                  │           │            
state     │  │                            │                  │                  │           │            
till      │  │      Reply status          │  Negotiation with multiple channels │           │            
completed └─ │◄───────────────────────────┤      (i.e rdbchannel repl)          │           │            
             │                            │◄───────────────────────────────────►│           │            
             │                            │                  │                  │  Fork     │            
             │                            │                  │                  ├──────────►│ ─┐         
                                          │                  │                  │           │  │         
                                          │   Slot snapshot as RESTORE commands │           │  │         
                                          │◄────────────────────────────────────────────────┤  │         
                                          │   Propagate      │                  │           │  │         
      ┌─────────────┐                     ├─────────────────►│                  │           │  │         
      │             │                     │                  │                  │           │  │ Snapshot
      │   Client    │                     │                  │                  │           │  │ delivery
      │             │                     │   Replication stream for slot range │           │  │ duration
      └──────┬──────┘                     │◄────────────────────────────────────┤           │  │         
             │                            │   Propagate      │                  │           │  │         
             │                            ├─────────────────►│                  │           │  │         
             │                            │                  │                  │           │  │         
             │    SET key value1          │                  │                  │           │  │         
             ├─────────────────────────────────────────────────────────────────►│           │  │         
             │         +OK                │                  │                  │           │ ─┘         
             │◄─────────────────────────────────────────────────────────────────┤           │            
             │                            │                  │                  │           │            
             │                            │    Drain repl stream                │ ──┐       │            
             │                            │◄────────────────────────────────────┤   │       │            
             │    SET key value2          │                  │                  │   │       │            
             ├─────────────────────────────────────────────────────────────────►│   │Write  │            
             │                            │                  │                  │   │pause  │            
             │                            │                  │                  │   │       │            
             │                            │  Publish new config via cluster bus │   │       │            
             │       +MOVED               ├────────────────────────────────────►│ ──┘       │            
             │◄─────────────────────────────────────────────────────────────────┤ ──┐       │            
             │                            │                  │                  │   │       │            
             │                            │                  │                  │   │Trim   │            
             │                            │                  │                  │ ──┘       │            
             │     SET key value2         │                  │                  │           │            
             ├───────────────────────────►│                  │                  │           │            
             │         +OK                │                  │                  │           │            
             │◄───────────────────────────┤                  │                  │           │            
             │                            │                  │                  │           │            
             │                            │                  │                  │           │            
```

### New commands introduced

There are two new commands:
1. A command to start, monitor, and cancel migration operations: `CLUSTER MIGRATION <arg>`
2. An internal command to manage slot transfer between source and destination: `CLUSTER SYNCSLOTS <arg>`

For more details, please refer to the [New Commands](#new-commands) section. Internal command messaging is mostly omitted in the diagram above for simplicity.


### Steps
1. Slot migration begins when the operator sends `CLUSTER MIGRATION IMPORT <start-slot> <end-slot> ...`
to the destination master. The process is initiated from the destination node, similar to REPLICAOF. This approach allows us to reuse the same logic and share code with the new replication mechanism (see https://github.com/redis/redis/pull/13732). The command can include multiple slot ranges. The destination node creates one migration task per source node, regardless of how many slot ranges are specified. Upon successfully creating the task, the destination node replies to the IMPORT command with the assigned task ID. The operator can then monitor progress using `CLUSTER MIGRATION STATUS ID <task-id>`. When the task’s state field changes to `completed`, the migration has finished successfully. See the [New Commands](#new-commands) section for a sample output.
2. After creating the migration task, the destination node requests replication of the slots using the internal command `CLUSTER SYNCSLOTS`.
3. Once the source node accepts the request, the destination node establishes a separate connection (similar to rdbchannel replication) so snapshot data and incremental changes can be transmitted in parallel.
4. The source node forks and starts delivering snapshot content (as per-key RESTORE commands) over one connection and incremental changes over the other. The destination master starts applying commands from the snapshot connection and accumulates the incremental changes. Applied commands are also propagated to the destination replicas via the replication backlog.

    Note: Only commands for the related slots are delivered to the destination node. This is done by writing them to the migration client’s output buffer, which serves as the replication stream for the migration operation.
5. Once the source node finishes delivering the snapshot and determines that the destination node has caught up (the remaining replication stream to consume has fallen below a configured limit), it pauses write traffic for the entire server. After pausing writes, the source node forwards any remaining write commands to the destination node.

6. Once the destination consumes all the writes, it bumps the cluster config epoch and changes the configuration. The new config is published via the cluster bus.
7. When the source node receives the new configuration, it starts redirecting clients and begins trimming the migrated slots, while also resuming write traffic on the server.
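
For example, a typical operator session might look like this (the address is illustrative, and the task ID reuses the sample from the [New Commands](#new-commands) section):

```
127.0.0.1:5001> CLUSTER MIGRATION IMPORT 0 1000
"24cf41718b20f7f05901743dffc40bc9b15db339"
127.0.0.1:5001> CLUSTER MIGRATION STATUS ID 24cf41718b20f7f05901743dffc40bc9b15db339
1)  1) "id"
    2) "24cf41718b20f7f05901743dffc40bc9b15db339"
    3) "slots"
    4) "0-1000"
   ...
   11) "state"
   12) "completed"
```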

### Internal slots synchronization state machine
![asm state machine](https://github.com/user-attachments/assets/b7db353c-969e-4bde-b77f-c6abe5aa13d3)

1. The destination node performs authentication using the cluster secret introduced in #13763 , and transmits its node ID information.
2. The destination node sends `CLUSTER SYNCSLOTS SYNC <task-id> <start-slot> <end-slot>` to initiate a slot synchronization request and establish the main channel. The source node responds with `+RDBCHANNELSYNCSLOTS`, indicating that the destination node should establish an RDB channel.
3. The destination node then sends `CLUSTER SYNCSLOTS RDBCHANNEL <task-id>` to establish the RDB channel, using the same task-id as in the previous step to associate the two connections as part of the same ASM task.
The source node replies with `+SLOTSSNAPSHOT` and forks a child process to transfer the slot snapshot.
4. The destination node applies the slot snapshot data received over the RDB channel, while proxying the command stream to replicas. At the same time, the main channel continues to read and buffer incremental commands in memory.
5. Once the source node finishes sending the slot snapshot, it notifies the destination node using the `CLUSTER SYNCSLOTS SNAPSHOT-EOF` command. The destination node then starts streaming the buffered commands while continuing to read and buffer incremental commands sent from the source.
6. The destination node periodically sends `CLUSTER SYNCSLOTS ACK <offset>` to inform the source of the applied data offset. When the offset gap falls below the handoff threshold, the source node pauses write operations. After all buffered data has been drained, it sends `CLUSTER SYNCSLOTS STREAM-EOF` to the destination node to hand off the slots.
7. Finally, the destination node takes over slot ownership, updates the slot configuration, bumps the epoch, and broadcasts the updates via the cluster bus. Once the source node detects the updated slot configuration, the slot migration process is complete.
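
Condensed into a wire-level sketch (channels and ordering as described above; internal details simplified):

```
dest   -> source (main channel): CLUSTER SYNCSLOTS SYNC <task-id> <start-slot> <end-slot>
source -> dest:                  +RDBCHANNELSYNCSLOTS
dest   -> source (rdb channel):  CLUSTER SYNCSLOTS RDBCHANNEL <task-id>
source -> dest:                  +SLOTSSNAPSHOT
source -> dest  (rdb channel):   ... slot snapshot as RESTORE commands ...
source -> dest  (main channel):  CLUSTER SYNCSLOTS SNAPSHOT-EOF
dest   -> source (periodic):     CLUSTER SYNCSLOTS ACK <offset>
source -> dest:                  CLUSTER SYNCSLOTS STREAM-EOF   (after write pause + drain)
```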

### Error handling
- If the connection between the source and destination is lost (due to disconnection, output buffer overflow, OOM, or timeout), the destination node automatically restarts the migration from the beginning. The destination node will retry the operation until it is explicitly cancelled using the `CLUSTER MIGRATION CANCEL <task-id>` command.
- If a replica connection drops during migration, it can later resume with PSYNC, since the imported slot data is also written to the replication backlog.
- During the write pause phase, the source node sets a timeout. If the destination node fails to drain remaining replication data and update the config during that time, the source node assumes the destination has failed and automatically resumes normal writes for the migrating slots.
- On any error, the destination node triggers a trim operation to discard any partially imported slot data.
- If a node crashes during import, unowned keys are deleted on startup.


### <a name="slot-snapshot-format-considerations"></a> Slot Snapshot Format Considerations 

When the source node forks to deliver slot content, there are, in theory, several possible formats for transmitting the snapshot data:

- **Mini RDB**: A compact RDB file containing only the keys from the migrating slots. This format is efficient for transmission, but it cannot be easily forwarded to destination-side replicas.
- **AOF format**: The source node can generate commands in AOF form (e.g., SET x y, HSET h f v) and stream them. Individual commands are easily appended to the replication stream and propagated to replicas. Large keys can also be split into multiple commands (incrementally reconstructing the value), similar to the AOF rewrite process.
- **RESTORE commands**: Each key is serialized and sent as a `RESTORE` command. These can be appended directly to the destination’s replication stream, though very large keys may make serialization and transmission less efficient.

We chose the `RESTORE` command as the default approach for the following reasons:
- It can be easily propagated to replicas.
- It is more efficient than AOF for most cases, and some module keys do not support the AOF format.
- For large **non-module** keys that are not strings, ASM automatically switches to AOF-based key encoding as an optimization when the key’s cardinality exceeds 512. This allows the key to be transferred in chunks rather than as a single large payload, reducing memory pressure and improving migration efficiency. In future versions, the RESTORE command may be enhanced to handle large keys more efficiently.

Some details:
- For RESTORE commands, Redis normally compresses the serialized values by default. We disable compression while delivering RESTORE commands, as compression comes with a performance hit; without it, replication is several times faster.
- For string keys, we still prefer the AOF format (i.e., SET commands), as it is currently more efficient than RESTORE, especially for big keys.
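
Putting this together, an illustrative slice of the snapshot stream might look like the following (keys and payloads are made up; the RESTORE payload is abbreviated):

```
SET user:1001:name "Ozan"                     (string key: AOF form, plain SET)
RESTORE user:1001:profile 0 "\x14\x03..."     (regular key: single RESTORE, no compression)
RPUSH events:1001 e1 e2 ... e512              (large non-module key: chunked AOF form)
RPUSH events:1001 e513 e514 ...
```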

### <a name="trimming-the-keys"></a> Trimming the keys 

When a migration completes successfully, the source node deletes the migrated keys from its local database.
Since the migrated slots may contain a large number of keys, this trimming process must be efficient and non-blocking.

In cluster mode, Redis maintains per-slot data structures for keys, expires, and subexpires. This organization makes it possible to efficiently detach all data associated with a given slot in a single step. During trimming, these slot-specific data structures are handed off to a background I/O (BIO) thread for asynchronous cleanup—similar to how FLUSHALL or FLUSHDB operate. This mechanism is referred to as background trimming, and it is the preferred and default method for ASM, ensuring that the main thread remains unblocked.

However, unlike Redis itself, some modules may not maintain per-slot data structures and therefore cannot drop the related slots' data in a single operation. To support these cases, Redis introduces active trimming, where key deletion occurs in the main thread instead. This is not a blocking operation: trimming runs incrementally in the main thread, periodically removing keys during the cron loop. Each deletion triggers a keyspace notification so that modules can react to individual key removals. While active trimming is less efficient, it ensures backward compatibility for modules during the transition period.

Before starting the trim, Redis checks whether any module is subscribed to the newly added `REDISMODULE_NOTIFY_KEY_TRIMMED` keyspace event. If such subscribers exist, active trimming is used; otherwise, background trimming is triggered. Going forward, modules are expected to adopt background trimming to take advantage of its performance and scalability benefits; active trimming will be phased out once modules migrate to the new model.

Redis also prefers active trimming if any client is using the client tracking feature (see [client-side caching](https://redis.io/docs/latest/develop/reference/client-side-caching/)). In the current client tracking protocol, when a database is flushed (e.g., via the FLUSHDB command), a null value is sent to tracking clients to indicate that they should invalidate all locally cached keys. However, there is currently no mechanism to signal that only specific slots have been flushed, and iterating over all keys in the slots to be trimmed would be a blocking operation. To avoid this, Redis automatically switches to active trimming in this case. In the future, the client tracking protocol could be extended to support slot-based invalidation, allowing background trimming to be used here as well.

Finally, trimming may also be triggered after a migration failure. In such cases, the operation ensures that any partially imported or inconsistent slot data is cleaned up, maintaining cluster consistency and preventing stale keys from remaining in the source or destination nodes.

Note about active trim: Subsequent migrations can complete while a prior trim is still running. In that case, the new migration’s trim job is queued and will start automatically after the current trim finishes. This does not affect slot ownership or client traffic—it only serializes the background cleanup.

### <a name="replica-handling"></a> Replica handling 

- During importing, new keys are propagated to the destination-side replicas. A replica checks slot ownership before replying to commands like SCAN, KEYS, and DBSIZE, so these unowned keys are not included in the reply.

  Also, when an import operation begins, the master now propagates an internal command through the replication stream, allowing replicas to recognize that an ASM operation is in progress. This is done with the internal `CLUSTER SYNCSLOTS CONF ASM-TASK` command. This enables replicas to trigger the relevant module events so that modules can adapt their behavior, for example, filtering out unowned keys from read-only requests during ASM operations. To support full sync with RDB delivery scenarios, a new AUX field is also added to the RDB: `cluster-asm-task`. Its value is a string in the format `task_id:source_node:dest_node:operation:state:slot_ranges`.

- After a successful migration or a failed import, the master trims the keys. In that case, the master propagates a new command to the replicas: `TRIMSLOTS RANGES <numranges> <start-slot> <end-slot> ...`. The replica starts trimming once this command is received.

### <a name="propagating-data-outside-the-keyspace"></a> Propagating data outside the keyspace

When the destination node is newly added to the cluster, certain data outside the keyspace may need to be propagated first.
A common example is functions. Previously, redis-cli handled this by transferring functions when a new node was added.
With ASM, Redis now automatically dumps and sends functions to the destination node using `FUNCTION RESTORE ..REPLACE` command — done purely for convenience to simplify setup.

Additionally, modules may also need to propagate their own data outside the keyspace.
To support this, a new API has been introduced: `RM_ClusterPropagateForSlotMigration()`.
See the [Module Support](#module-support) section for implementation details.

### Limitations

1. Single migration at a time: Only one ASM migration operation is allowed at a time. This limitation simplifies the current design but can be extended in the future.

2. Large key handling: For large keys, ASM switches to AOF encoding to deliver key data in chunks. This mechanism currently applies only to non-module keys. In the future, the RESTORE command may be extended to support chunked delivery, providing a unified solution for all key types. See [Slot Snapshot Format Considerations](#slot-snapshot-format-considerations) for details.

3. There are several cases that may cause an Atomic Slot Migration (ASM) to be aborted (can be retried afterwards):
    - FLUSHALL / FLUSHDB: These commands introduce complexity during ASM. For example, if executed on the migrating node, they must be propagated only for the migrating slots. However, when combined with active trimming, their execution may need to be deferred until it is safe to proceed, adding further complexity to the process.
    - FAILOVER: The replica cannot resume the migration process; the migration must restart from the beginning.
    - Module propagates cross-slot command during ASM via RM_Replicate(): If this occurs on the migrating node, Redis cannot split the command to propagate only the relevant slots to the ASM destination. To keep the logic simple and consistent, ASM is cancelled in this case. Modules should avoid propagating cross-slot commands during migration.
    - CLIENT PAUSE: The import task cannot progress during a write pause, as doing so would violate the guarantee that no writes occur during migration. To keep things simple, the ASM task is aborted when CLIENT PAUSE is active.
    - Manual Slot Configuration Changes: If slot configuration is modified manually during ASM (for example, when legacy migration methods are mixed with ASM), the process is aborted. Note: This situation is highly unexpected — users should not combine ASM with legacy migration methods.
    
4. When active trimming is enabled, a node must not re-import the same slots while trimming for those slots is still in progress. Otherwise, it can’t distinguish newly imported keys from pre-existing ones, and the trim cron might delete the incoming keys by mistake. In this state, the node rejects IMPORT operations for those slots until trimming completes. If the master has finished trimming but a replica is still trimming, the master may still start the import operation for those slots. So, the replica checks whether the master is sending commands for those slots; if so, it blocks the master’s client connection until trimming finishes. This is a corner case, but we believe the behavior is reasonable for now. In the worst case, the master may drop the replica (e.g., buffer overrun), triggering a new full sync.

# API Changes

## <a name="new-commands"></a> New Commands 

### Public commands
1. **Syntax:**  `CLUSTER MIGRATION IMPORT <start-slot> <end-slot> [<start-slot> <end-slot>]...`
  **Args:** Slot ranges
  **Reply:** 
    - String task ID
    - -ERR <message> on failure (e.g. invalid slot range) 

    **Description:** Executes on the destination master. Accepts multiple slot ranges and triggers atomic migration for the specified ranges. Returns a task ID that can be used to monitor the status of the task. In the CLUSTER MIGRATION STATUS output, the “state” field will be `completed` on a successful operation.

2. **Syntax:**  `CLUSTER MIGRATION CANCEL [ID <id> | ALL]`
  **Args:** Task ID or ALL
  **Reply:** Number of cancelled tasks

    **Description:** Cancels an ongoing migration task by its ID or cancels all tasks if ALL is specified. Note: Cancelling a task on the source node does not stop the migration on the destination node, which will continue retrying until it is also cancelled there.
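
    For example (the task ID is illustrative):

```
127.0.0.1:5001> CLUSTER MIGRATION CANCEL ID 24cf41718b20f7f05901743dffc40bc9b15db339
(integer) 1
```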


3. **Syntax:**  `CLUSTER MIGRATION STATUS [ID <id> | ALL]`
  **Args:** Task ID or ALL
    - **ID:** If provided, returns the status of the specified migration task.
    - **ALL:** Lists the status of all migration tasks.

    **Reply:**
      - A list of migration task details (both ongoing and completed ones).
      - Empty list if the given task ID does not exist.

    **Description:** Displays the status of all current and completed atomic slot migration tasks. If a specific task ID is provided, it returns detailed information for that task only.
    
    **Sample output:**
```
127.0.0.1:5001> cluster migration status all
1)  1) "id"
    2) "24cf41718b20f7f05901743dffc40bc9b15db339"
    3) "slots"
    4) "0-1000"
    5) "source"
    6) "1098d90d9ba2d1f12965442daf501ef0b6667bec"
    7) "dest"
    8) "b3b5b426e7ea6166d1548b2a26e1d5adeb1213ac"
    9) "operation"
   10) "migrate"
   11) "state"
   12) "completed"
   13) "last_error"
   14) ""
   15) "retries"
   16) "0"
   17) "create_time"
   18) "1759694528449"
   19) "start_time"
   20) "1759694528449"
   21) "end_time"
   22) "1759694528464"
   23) "write_pause_ms"
   24) "10"
```

### Internal commands

1. **Syntax:**  `CLUSTER SYNCSLOTS <arg> ...`
  **Args:** Internal messaging operations
  **Reply:**  +OK or -ERR <message> on failure (e.g. invalid slot range) 

    **Description:** Used for internal communication between source and destination nodes, e.g. handshaking, establishing multiple channels, and triggering handoff.
    
2. **Syntax:**  `TRIMSLOTS RANGES <numranges> <start-slot> <end-slot> ...`
  **Args:** Slot ranges to trim
  **Reply:**  +OK 

    **Description:** The master propagates this command to replicas so that they can trim unowned keys after a successful migration or on a failed import.

## New configs

- `cluster-slot-migration-max-archived-tasks`: Redis keeps the last N migration tasks in memory so they can be listed in the `CLUSTER MIGRATION STATUS ALL` output. This config controls the maximum number of archived ASM tasks. Default value: 32, used as a hidden config
- `cluster-slot-migration-handoff-max-lag-bytes`: After the slot snapshot is completed, if the remaining replication stream size falls below this threshold, the source node pauses writes to hand off slot ownership. A higher value may trigger the handoff earlier but can lead to a longer write pause, since more data remains to be replicated. A lower value can result in a shorter write pause, but it may be harder to reach the threshold if there is a steady flow of incoming writes. Default value: 1MB
- `cluster-slot-migration-write-pause-timeout`: The maximum duration (in milliseconds) that the source node pauses writes during ASM handoff. After pausing writes, if the destination node fails to take over the slots within this timeout (for example, due to a cluster configuration update failure), the source node assumes the migration has failed and resumes writes to prevent indefinite blocking. Default value: 10 seconds
- `cluster-slot-migration-sync-buffer-drain-timeout`: Timeout in milliseconds for the sync buffer to be drained during ASM.
After the destination applies the accumulated buffer, the source continues sending commands for the migrating slots. The destination keeps applying them, but if the gap remains above the acceptable limit (see `cluster-slot-migration-handoff-max-lag-bytes`), synchronization could continue endlessly. A timeout check is required to handle this case.
The timeout is calculated as **the maximum of two values**:
   - A configurable timeout (`cluster-slot-migration-sync-buffer-drain-timeout`) to avoid false positives.
   - A dynamic timeout based on the time the destination took to apply the slot snapshot and the buffer accumulated during slot snapshot delivery. The destination should be able to drain the remaining sync buffer in less time than this; we multiply it by 2 to be more conservative.

    Default value: 60000 milliseconds, used as a hidden config
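
As an illustration, the non-hidden settings above could be set in redis.conf like this, assuming standard redis.conf syntax (the values shown are the defaults described above):

```
cluster-slot-migration-handoff-max-lag-bytes 1mb
cluster-slot-migration-write-pause-timeout 10000
```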

## New flag in CLIENT LIST
- The client responsible for importing slots is marked with the `o` flag.
- The client responsible for migrating slots is marked with the `g` flag.

## New INFO fields

- `mem_cluster_slot_migration_output_buffer`: Memory usage of the migration client’s output buffer. Redis writes incoming changes to this buffer during the migration process.
- `mem_cluster_slot_migration_input_buffer`: Memory usage of the accumulated replication stream buffer on the importing node.
- `mem_cluster_slot_migration_input_buffer_peak`: Peak accumulated replication buffer size on the importing side.

## New CLUSTER INFO fields

- `cluster_slot_migration_active_tasks`: Number of in-progress ASM tasks. Currently, this is either 0 or 1.
- `cluster_slot_migration_active_trim_running`: Number of active trim jobs in progress or scheduled.
- `cluster_slot_migration_active_trim_current_job_keys`: Number of keys scheduled for deletion in the current trim job.
- `cluster_slot_migration_active_trim_current_job_trimmed`: Number of keys already deleted in the current trim job.
- `cluster_slot_migration_stats_active_trim_started`: Total number of trim jobs that have started since the process began.
- `cluster_slot_migration_stats_active_trim_completed`: Total number of trim jobs completed since the process began.
- `cluster_slot_migration_stats_active_trim_cancelled`: Total number of trim jobs cancelled since the process began.


## Changes in RDB format

A new aux field is added to the RDB: `cluster-asm-task`. When an import operation begins, the master now propagates an internal command through the replication stream, allowing replicas to recognize that an ASM operation is in progress. This enables replicas to trigger the relevant module events so that modules can adapt their behavior, for example, filtering out unowned keys from read-only requests during ASM operations. To support RDB delivery scenarios, a new field is added to the RDB. See [Replica handling](#replica-handling).

## Bug fix
- Fix a memory leak when processing the forget-node message type.
- Fix a data race when writing a reply directly to the replica client with multi-threading enabled.

We don't plan to backport these to older versions, since they are very rare cases.

## Keys visibility
When performing atomic slot migration, while keys are being imported on the destination node or trimmed on the source/destination, these keys are filtered out of the following commands:
- KEYS
- SCAN
- RANDOMKEY
- CLUSTER GETKEYSINSLOT
- DBSIZE
- CLUSTER COUNTKEYSINSLOT

The only command that reflects the increasing number of keys is:
- INFO KEYSPACE

## <a name="module-support"></a> Module Support 

**NOTE:** Please read the [trimming](#trimming-the-keys) section to see how ASM decides on the trimming method when modules are in use.

### New notification:
```c
#define REDISMODULE_NOTIFY_KEY_TRIMMED (1<<17) 
```
When a key is deleted by the active trim operation, this notification is sent to subscribed modules.
ASM also automatically chooses the trimming method depending on whether there are any subscribers to this new event. Please see the further details in the [trimming](#trimming-the-keys) section.
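
A minimal sketch of a module subscribing to this notification (the module name and logging are illustrative; note that registering a subscriber makes ASM fall back to active trimming):

```c
#include "redismodule.h"

/* Called once per key removed by ASM active trimming. */
static int OnKeyTrimmed(RedisModuleCtx *ctx, int type, const char *event,
                        RedisModuleString *key) {
    REDISMODULE_NOT_USED(type);
    REDISMODULE_NOT_USED(event);
    RedisModule_Log(ctx, "notice", "trimmed by ASM: %s",
                    RedisModule_StringPtrLen(key, NULL));
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "trimwatch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Subscribing to KEY_TRIMMED makes ASM use active trimming. */
    return RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_TRIMMED,
                                                 OnKeyTrimmed);
}
```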


### New struct in the API:
```c
typedef struct RedisModuleSlotRange {
    uint16_t start;
    uint16_t end;
} RedisModuleSlotRange;

typedef struct RedisModuleSlotRangeArray {
    int32_t num_ranges;
    RedisModuleSlotRange ranges[];
} RedisModuleSlotRangeArray;
```

### New Events
#### 1. REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION (RedisModuleEvent_ClusterSlotMigration)

These events notify modules about different stages of Atomic Slot Migration (ASM) operations, such as when an import or migration starts, fails, or completes. Modules can use these notifications to track cluster slot movements or perform custom logic during ASM transitions.

```c
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED 0
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_FAILED 1
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_COMPLETED 2
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_STARTED 3
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_FAILED 4
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED 5
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE 6
```

Parameter to these events:
```c
typedef struct RedisModuleClusterSlotMigrationInfo {
    uint64_t version; /* Not used since this structure is never passed
                         from the module to the core right now. Here
                         for future compatibility. */
    char source_node_id[REDISMODULE_NODE_ID_LEN + 1];
    char destination_node_id[REDISMODULE_NODE_ID_LEN + 1];
    const char *task_id;
    RedisModuleSlotRangeArray *slots;
} RedisModuleClusterSlotMigrationInfoV1;

#define RedisModuleClusterSlotMigrationInfo RedisModuleClusterSlotMigrationInfoV1
```
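
A minimal sketch of subscribing to these events (the callback body is illustrative):

```c
static void OnSlotMigrationEvent(RedisModuleCtx *ctx, RedisModuleEvent e,
                                 uint64_t subevent, void *data) {
    REDISMODULE_NOT_USED(e);
    RedisModuleClusterSlotMigrationInfo *info = data;
    if (subevent == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED) {
        RedisModule_Log(ctx, "notice", "ASM import started: task %s, %d slot range(s)",
                        info->task_id, (int)info->slots->num_ranges);
    }
}

/* In RedisModule_OnLoad: */
RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ClusterSlotMigration,
                                   OnSlotMigrationEvent);
```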


#### 2. REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM (RedisModuleEvent_ClusterSlotMigrationTrim)

These events inform modules about the lifecycle of ASM key trimming operations. Modules can use them to detect when trimming starts, completes, or is performed asynchronously in the background.

```c
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED     0
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_COMPLETED   1
#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_BACKGROUND  2
```

Parameter to these events:
```c
typedef struct RedisModuleClusterSlotMigrationTrimInfo {
    uint64_t version; /* Not used since this structure is never passed
                         from the module to the core right now. Here
                         for future compatibility. */
    RedisModuleSlotRangeArray *slots;
} RedisModuleClusterSlotMigrationTrimInfoV1;

#define RedisModuleClusterSlotMigrationTrimInfo RedisModuleClusterSlotMigrationTrimInfoV1
```
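
And a corresponding sketch for the trim events (again, the callback body is illustrative):

```c
static void OnSlotTrimEvent(RedisModuleCtx *ctx, RedisModuleEvent e,
                            uint64_t subevent, void *data) {
    REDISMODULE_NOT_USED(e);
    RedisModuleClusterSlotMigrationTrimInfo *info = data;
    if (subevent == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED)
        RedisModule_Log(ctx, "notice", "ASM trim started over %d slot range(s)",
                        (int)info->slots->num_ranges);
}

/* In RedisModule_OnLoad: */
RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ClusterSlotMigrationTrim,
                                   OnSlotTrimEvent);
```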

### New functions

```c
/* Returns 1 if keys in the specified slot can be accessed by this node,
 * 0 otherwise.
 *
 * This function returns 1 in the following cases:
 * - The slot is owned by this node, or by its master if this node is a replica
 * - The slot is being imported under the old slot migration approach
 *   (CLUSTER SETSLOT <slot> IMPORTING ..)
 * - Not in cluster mode (all slots are accessible)
 *
 * Returns 0 for:
 * - Invalid slot numbers (< 0 or >= 16384)
 * - Slots owned by other nodes
 */
int RM_ClusterCanAccessKeysInSlot(int slot);

/* Propagate commands along with slot migration.
 *
 * This function allows modules to add commands that will be sent to the
 * destination node before the actual slot migration begins. It should only be
 * called during the REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE
 * event.
 *
 * This function can be called multiple times within the same event to
 * replicate multiple commands. All commands will be sent before the
 * actual slot data migration begins.
 *
 * Note: This function is only available in the fork child process just before
 *       slot snapshot delivery begins.
 *
 * On success REDISMODULE_OK is returned, otherwise
 * REDISMODULE_ERR is returned and errno is set to the following values:
 *
 * * EINVAL: function arguments or format specifiers are invalid.
 * * EBADF: not called in the correct context, e.g. not called in the
 *          REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE event.
 * * ENOENT: command does not exist.
 * * ENOTSUP: command is cross-slot.
 * * ERANGE: command contains keys that are not within the migrating slot range.
 */
int RM_ClusterPropagateForSlotMigration(RedisModuleCtx *ctx,
                                        const char *cmdname,
                                        const char *fmt, ...);

/* Returns the locally owned slot ranges for the node.
 *
 * An optional `ctx` can be provided to enable auto-memory management.
 * If cluster mode is disabled, the array will include all slots (0-16383).
 * If the node is a replica, the slot ranges of its master are returned.
 *
 * The returned array must be freed with RM_ClusterFreeSlotRanges().
 */
RedisModuleSlotRangeArray *RM_ClusterGetLocalSlotRanges(RedisModuleCtx *ctx);

/* Frees a slot range array returned by RM_ClusterGetLocalSlotRanges().
 * Pass the `ctx` pointer only if the array was created with a context. */
void RM_ClusterFreeSlotRanges(RedisModuleCtx *ctx, RedisModuleSlotRangeArray *slots);
```
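
A sketch of how a module might use these (the module-side names assume the usual RedisModule_* exports of the RM_* functions above; the propagated command and its arguments are hypothetical):

```c
#include <errno.h>

/* Log the node's owned slot ranges. */
static void LogLocalSlots(RedisModuleCtx *ctx) {
    RedisModuleSlotRangeArray *slots = RedisModule_ClusterGetLocalSlotRanges(ctx);
    for (int32_t i = 0; i < slots->num_ranges; i++) {
        RedisModule_Log(ctx, "notice", "owned slots: %u-%u",
                        (unsigned)slots->ranges[i].start,
                        (unsigned)slots->ranges[i].end);
    }
    RedisModule_ClusterFreeSlotRanges(ctx, slots);
}

/* From a MIGRATE_MODULE_PROPAGATE event handler: ship out-of-keyspace module
 * state to the destination before the slot data ("mymod.setmeta" and its
 * arguments are hypothetical). */
static void PropagateModuleState(RedisModuleCtx *ctx) {
    if (RedisModule_ClusterPropagateForSlotMigration(ctx, "mymod.setmeta", "cc",
            "meta-key", "meta-value") != REDISMODULE_OK) {
        RedisModule_Log(ctx, "warning", "ASM propagate failed, errno=%d", errno);
    }
}
```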

## ASM API for alternative cluster implementations

Following https://github.com/redis/redis/pull/12742, the Redis cluster code was restructured to support alternative cluster implementations. Redis uses the cluster_legacy.c implementation by default. This PR adds a generic ASM API so alternative implementations can initiate and coordinate Atomic Slot Migration (ASM) while Redis executes the data movement and emits state changes.

The documentation lives in `cluster.h`:

There are two new functions:

```c
/* Called by the cluster implementation to request an ASM operation.
 * (cluster impl --> redis) */
int clusterAsmProcess(const char *task_id, int event, void *arg, char **err);

/* Called when an ASM event occurs to notify the cluster implementation.
 * (redis --> cluster impl) */
int clusterAsmOnEvent(const char *task_id, int event, void *arg);
```

```c
/* API for alternative cluster implementations to start and coordinate
 * Atomic Slot Migration (ASM).
 *
 * These two functions drive ASM for alternative cluster implementations:
 * - clusterAsmProcess(...) impl -> redis: initiates/advances/cancels ASM operations
 * - clusterAsmOnEvent(...) redis -> impl: notifies state changes
 *
 * Generic steps for an alternative implementation:
 * - On the destination side, the implementation calls
 *   clusterAsmProcess(ASM_EVENT_IMPORT_START) to start an import operation.
 * - Redis calls clusterAsmOnEvent() when an ASM event occurs.
 * - On the source side, Redis will call clusterAsmOnEvent(ASM_EVENT_HANDOFF_PREP)
 *   when slots are ready to be handed off and the write pause is needed.
 * - The implementation stops the traffic to the slots and calls
 *   clusterAsmProcess(ASM_EVENT_HANDOFF).
 * - On the destination side, Redis calls clusterAsmOnEvent(ASM_EVENT_TAKEOVER)
 *   when the destination node is ready to take over the slots, waiting for the
 *   ownership change.
 * - The cluster implementation updates the config and calls
 *   clusterAsmProcess(ASM_EVENT_DONE) to notify Redis that the slots'
 *   ownership has changed.
 *
 * Sequence diagram for import:
 * - Note: shows only the events that the cluster implementation needs to react to.
 *
 * ┌───────────────┐     ┌───────────────┐     ┌───────────────┐     ┌───────────────┐
 * │  Destination  │     │  Destination  │     │    Source     │     │    Source     │
 * │ Cluster impl  │     │    Master     │     │    Master     │     │ Cluster impl  │
 * └───────┬───────┘     └───────┬───────┘     └───────┬───────┘     └───────┬───────┘
 *         │                     │                     │                     │
 *         │ ASM_EVENT_IMPORT_START                    │                     │
 *         ├────────────────────►│                     │                     │
 *         │                     │ CLUSTER SYNCSLOTS <arg>                   │
 *         │                     ├────────────────────►│                     │
 *         │                     │                     │                     │
 *         │                     │ SNAPSHOT (restore cmds)                   │
 *         │                     │◄────────────────────┤                     │
 *         │                     │     Repl stream     │                     │
 *         │                     │◄────────────────────┤                     │
 *         │                     │                     │ ASM_EVENT_HANDOFF_PREP
 *         │                     │                     ├────────────────────►│
 *         │                     │                     │  ASM_EVENT_HANDOFF  │
 *         │                     │                     │◄────────────────────┤
 *         │                     │  Drain repl stream  │                     │
 *         │                     │◄────────────────────┤                     │
 *         │  ASM_EVENT_TAKEOVER │                     │                     │
 *         │◄────────────────────┤                     │                     │
 *         │                     │                     │                     │
 *         │    ASM_EVENT_DONE   │                     │    ASM_EVENT_DONE   │
 *         ├────────────────────►│                     │◄────────────────────┤
 *         │                     │                     │                     │
 */

#define ASM_EVENT_IMPORT_START      1   /* Start a new import operation (destination side) */
#define ASM_EVENT_CANCEL            2   /* Cancel an ongoing import/migrate operation (source and destination side) */
#define ASM_EVENT_HANDOFF_PREP      3   /* Slot is ready to be handed off to the destination shard (source side) */
#define ASM_EVENT_HANDOFF           4   /* Notify that the slot can be handed off (source side) */
#define ASM_EVENT_TAKEOVER          5   /* Ready to take over the slot, waiting for config change (destination side) */
#define ASM_EVENT_DONE              6   /* Notify that import/migrate is completed, config is updated (source and destination side) */

#define ASM_EVENT_IMPORT_PREP       7   /* Import is about to start, the implementation may reject by returning C_ERR */
#define ASM_EVENT_IMPORT_STARTED    8   /* Import started */
#define ASM_EVENT_IMPORT_FAILED     9   /* Import failed */
#define ASM_EVENT_IMPORT_COMPLETED  10  /* Import completed (config updated) */
#define ASM_EVENT_MIGRATE_PREP      11  /* Migrate is about to start, the implementation may reject by returning C_ERR */
#define ASM_EVENT_MIGRATE_STARTED   12  /* Migrate started */
#define ASM_EVENT_MIGRATE_FAILED    13  /* Migrate failed */
#define ASM_EVENT_MIGRATE_COMPLETED 14  /* Migrate completed (config updated) */
```

------

Co-authored-by: Yuan Wang <yuan.wang@redis.com>

This document serves as both a quick start guide to Redis and a detailed resource for building it from source.


What is Redis?

For developers who are building real-time, data-driven applications, Redis is the preferred, fastest, and most feature-rich cache, data structure server, and document and vector query engine.

Key use cases

Redis excels in various applications, including:

  • Caching: Supports multiple eviction policies, key expiration, and hash-field expiration.
  • Distributed Session Store: Offers flexible session data modeling (string, JSON, hash).
  • Data Structure Server: Provides low-level data structures (strings, lists, sets, hashes, sorted sets, JSON, etc.) with high-level semantics (counters, queues, leaderboards, rate limiters) and supports transactions & scripting.
  • NoSQL Data Store: Key-value, document, and time series data storage.
  • Search and Query Engine: Indexing for hash/JSON documents, supporting vector search, full-text search, geospatial queries, ranking, and aggregations via Redis Query Engine.
  • Event Store & Message Broker: Implements queues (lists), priority queues (sorted sets), event deduplication (sets), streams, and pub/sub with probabilistic stream processing capabilities.
  • Vector Store for GenAI: Integrates with AI applications (e.g. LangGraph, mem0) for short-term memory, long-term memory, LLM response caching (semantic caching), and retrieval augmented generation (RAG).
  • Real-Time Analytics: Powers personalization, recommendations, fraud detection, and risk assessment.

Why choose Redis?

Redis is a popular choice for developers worldwide due to its combination of speed, flexibility, and rich feature set. Here's why people choose Redis:

  • Performance: Because Redis keeps data primarily in memory and uses efficient data structures, it achieves extremely low latency (often sub-millisecond) for both read and write operations. This makes it ideal for applications demanding real-time responsiveness.
  • Flexibility: Redis isn't just a key-value store; it provides native support for the wide range of data structures and capabilities listed in What is Redis?
  • Extensibility: Redis is not limited to the built-in data structures; it has a modules API that makes it possible to extend Redis functionality and rapidly implement new Redis commands.
  • Simplicity: Redis has a simple, text-based protocol and a well-documented command set.
  • Ubiquity: Redis is battle-tested in production workloads at massive scale. There is a good chance you indirectly interact with Redis several times daily.
  • Versatility: Redis is the de facto standard for use cases such as:
    • Caching: quickly access frequently used data without needing to query your primary database
    • Session management: read and write user session data without hurting user experience or slowing down every API call
    • Querying, sorting, and analytics: perform deduplication, full text search, and secondary indexing on in-memory data as fast as possible
    • Messaging and interservice communication: job queues, message brokering, pub/sub, and streams for communicating between services
    • Vector operations: Long-term and short-term LLM memory, RAG content retrieval, semantic caching, semantic routing, and vector similarity search

In summary, Redis provides a powerful, fast, and flexible toolkit for solving a wide variety of data management challenges.

What is Redis Open Source?

Redis Community Edition (Redis CE) was renamed Redis Open Source with the v8.0 release.

Redis Ltd. also offers Redis Software, a self-managed software with additional compliance, reliability, and resiliency for enterprise scaling, and Redis Cloud, a fully managed service integrated with Google Cloud, Azure, and AWS for production-ready apps.

Read more about the differences between Redis Open Source and Redis here.

Getting started

If you want to get up and running with Redis quickly without needing to build from source, use one of the following methods:

If you prefer to build Redis from source - see instructions below.

Redis starter projects

To get started as quickly as possible in your language of choice, use one of the following starter projects:

Using Redis with client libraries

To connect your application to Redis, you will need a client library. Redis has documented client libraries in most popular languages, with community-supported client libraries in additional languages.

Using Redis with redis-cli

redis-cli is Redis' command line interface. It is available as part of all the binary distributions and when you build Redis from source.

You can start a redis-server instance, and then, in another terminal try the following:

cd src
./redis-cli
redis> ping
PONG
redis> set foo bar
OK
redis> get foo
"bar"
redis> incr mycounter
(integer) 1
redis> incr mycounter
(integer) 2
redis>

Using Redis with Redis Insight

For a more visual and user-friendly experience, use Redis Insight - a tool that lets you explore data, design, develop, and optimize your applications while also serving as a platform for Redis education and onboarding. Redis Insight integrates Redis Copilot, a natural language AI assistant that improves the experience when working with data and commands.

Redis data types, processing engines, and capabilities

Redis provides a variety of data types, processing engines, and capabilities to support a wide range of use cases:

Important: Features marked with an asterisk (*) require Redis to be compiled with the BUILD_WITH_MODULES=yes flag when building Redis from source

  • String: Sequences of bytes, including text, serialized objects, and binary arrays used for caching, counters, and bitwise operations.
  • JSON: Nested JSON documents that are indexed and searchable using JSONPath expressions and with Redis Query Engine
  • Hash: Field-value maps used to represent basic objects and store groupings of key-value pairs with support for hash field expiration (TTL)
  • Redis Query Engine: Use Redis as a document database, a vector database, a secondary index, and a search engine. Define indexes for hash and JSON documents and then use a rich query language for vector search, full-text search, geospatial queries, and aggregations.
  • List: Linked lists of string values used as stacks, queues, and for queue management.
  • Set: Unordered collection of unique strings used for tracking unique items, relations, and common set operations (intersections, unions, differences).
  • Sorted set: Collection of unique strings ordered by an associated score used for leaderboards and rate limiters.
  • Vector set (beta): Collection of vector embeddings used for semantic similarity search, semantic caching, semantic routing, and Retrieval Augmented Generation (RAG).
  • Geospatial indexes: Coordinates used for finding nearby points within a given radius or bounding box.
  • Bitmap: A set of bit-oriented operations defined on the string type used for efficient set representations and object permissions.
  • Bitfield: Binary-encoded strings that let you set, increment, and get integer values of arbitrary bit length used for limited-range counters, numeric values, and multi-level object permissions such as role-based access control (RBAC)
  • Hyperloglog: A probabilistic data structure for approximating the cardinality of a set used for analytics such as counting unique visits, form fills, etc.
  • *Bloom filter: A probabilistic data structure to check if a given value is present in a set. Used for fraud detection, ad placement, and unique column (i.e. username/email/slug) checks.
  • *Cuckoo filter: A probabilistic data structure for checking if a given value is present in a set while also allowing limited counting and deletions used in targeted advertising and coupon code validation.
  • *t-digest: A probabilistic data structure used for estimating the percentile of a large dataset without having to store and order all the data points. Used for hardware/software monitoring, online gaming, network traffic monitoring, and predictive maintenance.
  • *Top-k: A probabilistic data structure for finding the most frequent values in a data stream used for trend discovery.
  • *Count-min sketch: A probabilistic data structure for estimating how many times a given value appears in a data stream used for sales volume calculations.
  • Time series: Data points indexed in time order used for monitoring sensor data, asset tracking, and predictive analytics
  • Pub/sub: A lightweight messaging capability. Publishers send messages to a channel, and subscribers receive messages from that channel.
  • Stream: An append-only log with random access capabilities and complex consumption strategies such as consumer groups. Used for event sourcing, sensor monitoring, and notifications.
  • Transaction: Allows the execution of a group of commands in a single step. A request sent by another client will never be served in the middle of the execution of a transaction. This guarantees that the commands are executed as a single isolated operation (see the example after this list).
  • Programmability: Upload and execute Lua scripts on the server. Scripts can employ programmatic control structures and use most of the commands while executing to access the database. Because scripts are executed on the server, reading and writing data from scripts is very efficient.
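
As a brief illustration of transactions and server-side Lua scripting, here is a minimal redis-cli session; the key name counter is only an example:

    127.0.0.1:6379> MULTI
    OK
    127.0.0.1:6379> INCR counter
    QUEUED
    127.0.0.1:6379> INCR counter
    QUEUED
    127.0.0.1:6379> EXEC
    1) (integer) 1
    2) (integer) 2
    127.0.0.1:6379> EVAL "return redis.call('GET', KEYS[1])" 1 counter
    "2"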

Community

Redis Community Resources

Build Redis from source

This section refers to building Redis from source. If you want to get up and running with Redis quickly without needing to build from source see the Getting started section.

Build and run Redis with all data structures - Ubuntu 20.04 (Focal)

Tested with the following Docker image:

  • ubuntu:20.04
  1. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    apt-get update
    apt-get install -y sudo
    sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf gcc-10 g++-10 libtool
    
  2. Use GCC 10 as the default compiler

    Update the system's default compiler to GCC 10:

    sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10
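    # Optional check: the default compiler should now report gcc 10.x
    gcc --version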
    
  3. Install CMake

    Install CMake using pip3 and link it for system-wide access:

    pip3 install cmake==3.31.6
    sudo ln -sf /usr/local/bin/cmake /usr/bin/cmake
    cmake --version
    

    Note: CMake version 3.31.6 is the latest supported version. Newer versions cannot be used.

  4. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
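    # For example, with version 8.0.0 the command above becomes:
    #   wget -O redis-8.0.0.tar.gz https://github.com/redis/redis/archive/refs/tags/8.0.0.tar.gz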
    
  5. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  6. Build Redis

    Set the necessary environment variables and compile Redis:

    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  7. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
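    # Optional sanity check from a second terminal - the server should reply PONG:
    #   ./src/redis-cli PING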
    

Build and run Redis with all data structures - Ubuntu 22.04 (Jammy)

Tested with the following Docker image:

  • ubuntu:22.04
  1. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    apt-get update
    apt-get install -y sudo
    sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool
    
  2. Install CMake

    Install CMake using pip3 and link it for system-wide access:

    pip3 install cmake==3.31.6
    sudo ln -sf /usr/local/bin/cmake /usr/bin/cmake
    cmake --version
    

    Note: CMake version 3.31.6 is the latest supported version. Newer versions cannot be used.

  3. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  4. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  5. Build Redis

    Set the necessary environment variables and build Redis:

    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  6. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
    

Build and run Redis with all data structures - Ubuntu 24.04 (Noble)

Tested with the following Docker image:

  • ubuntu:24.04
  1. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    apt-get update
    apt-get install -y sudo
    sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool
    
  2. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  3. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  4. Build Redis

    Set the necessary environment variables and build Redis:

    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  5. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
    

Build and run Redis with all data structures - Debian 11 (Bullseye) / 12 (Bookworm)

Tested with the following Docker images:

  • debian:bullseye
  • debian:bullseye-slim
  • debian:bookworm
  • debian:bookworm-slim
  1. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    apt-get update
    apt-get install -y sudo
    sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool
    
  2. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  3. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  4. Build Redis

    Set the necessary environment variables and build Redis:

    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  5. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
    

Build and run Redis with all data structures - AlmaLinux 8.10 / Rocky Linux 8.10

Tested with the following Docker images:

  • almalinux:8.10
  • almalinux:8.10-minimal
  • rockylinux/rockylinux:8.10
  • rockylinux/rockylinux:8.10-minimal
  1. Prepare the system

    For 8.10-minimal, install sudo and dnf as follows:

    microdnf install dnf sudo -y
    

    For 8.10 (regular), install sudo as follows:

    dnf install sudo -y
    

    Clean the package metadata, enable required repositories, and install development tools:

    sudo dnf clean all
    sudo tee /etc/yum.repos.d/goreleaser.repo > /dev/null <<EOF
    [goreleaser]
    name=GoReleaser
    baseurl=https://repo.goreleaser.com/yum/
    enabled=1
    gpgcheck=0
    EOF
    sudo dnf update -y
    sudo dnf groupinstall "Development Tools" -y
    sudo dnf config-manager --set-enabled powertools
    sudo dnf install -y epel-release
    
  2. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    sudo dnf install -y --nobest --skip-broken pkg-config wget gcc-toolset-13-gcc gcc-toolset-13-gcc-c++ git make openssl openssl-devel python3.11 python3.11-pip python3.11-devel unzip rsync clang curl libtool automake autoconf jq systemd-devel
    

    Create a Python virtual environment:

    python3.11 -m venv /opt/venv
    

    Enable the GCC toolset:

    sudo cp /opt/rh/gcc-toolset-13/enable /etc/profile.d/gcc-toolset-13.sh
    echo "source /etc/profile.d/gcc-toolset-13.sh" | sudo tee -a /etc/bashrc
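    # Optional check: the toolset compiler should report gcc 13.x
    source /etc/profile.d/gcc-toolset-13.sh && gcc --version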
    
  3. Install CMake

    Install CMake 3.25.1 manually:

    CMAKE_VERSION=3.25.1
    ARCH=$(uname -m)
    if [ "$ARCH" = "x86_64" ]; then
      CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-x86_64.sh
    else
      CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-aarch64.sh
    fi
    wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/${CMAKE_FILE}
    chmod +x ${CMAKE_FILE}
    ./${CMAKE_FILE} --skip-license --prefix=/usr/local --exclude-subdir
    rm ${CMAKE_FILE}
    cmake --version
    
  4. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  5. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  6. Build Redis

    Enable the GCC toolset, set the necessary environment variables, and build Redis:

    source /etc/profile.d/gcc-toolset-13.sh
    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  7. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
    

Build and run Redis with all data structures - AlmaLinux 9.5 / Rocky Linux 9.5

Tested with the following Docker images:

  • almalinux:9.5
  • almalinux:9.5-minimal
  • rockylinux/rockylinux:9.5
  • rockylinux/rockylinux:9.5-minimal
  1. Prepare the system

    For 9.5-minimal, install sudo and dnf as follows:

    microdnf install dnf sudo -y
    

    For 9.5 (regular), install sudo as follows:

    dnf install sudo -y
    

    Clean the package metadata, enable required repositories, and install development tools:

    sudo tee /etc/yum.repos.d/goreleaser.repo > /dev/null <<EOF
    [goreleaser]
    name=GoReleaser
    baseurl=https://repo.goreleaser.com/yum/
    enabled=1
    gpgcheck=0
    EOF
    sudo dnf clean all
    sudo dnf makecache
    sudo dnf update -y
    
  2. Install required dependencies

    Update your package lists and install the necessary development tools and libraries:

    sudo dnf install -y --nobest --skip-broken pkg-config xz wget which gcc-toolset-13-gcc gcc-toolset-13-gcc-c++ git make openssl openssl-devel python3 python3-pip python3-devel unzip rsync clang curl libtool automake autoconf jq systemd-devel
    

    Create a Python virtual environment:

    python3 -m venv /opt/venv
    

    Enable the GCC toolset:

    sudo cp /opt/rh/gcc-toolset-13/enable /etc/profile.d/gcc-toolset-13.sh
    echo "source /etc/profile.d/gcc-toolset-13.sh" | sudo tee -a /etc/bashrc
    
  3. Install CMake

    Install CMake 3.25.1 manually:

    CMAKE_VERSION=3.25.1
    ARCH=$(uname -m)
    if [ "$ARCH" = "x86_64" ]; then
      CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-x86_64.sh
    else
      CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-aarch64.sh
    fi
    wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/${CMAKE_FILE}
    chmod +x ${CMAKE_FILE}
    ./${CMAKE_FILE} --skip-license --prefix=/usr/local --exclude-subdir
    rm ${CMAKE_FILE}
    cmake --version
    
  4. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd /usr/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  5. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd /usr/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  6. Build Redis

    Enable the GCC toolset, set the necessary environment variables, and build Redis:

    source /etc/profile.d/gcc-toolset-13.sh
    cd /usr/src/redis-<version>
    export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes
    make -j "$(nproc)" all
    
  7. Run Redis

    cd /usr/src/redis-<version>
    ./src/redis-server redis-full.conf
    

Build and run Redis with all data structures - macOS 13 (Ventura) and macOS 14 (Sonoma)

  1. Install Homebrew

    If Homebrew is not already installed, follow the installation instructions on the Homebrew home page.

  2. Install required packages

    export HOMEBREW_NO_AUTO_UPDATE=1
    brew update
    brew install coreutils
    brew install make
    brew install openssl
    brew install llvm@18
    brew install cmake
    brew install gnu-sed
    brew install automake
    brew install libtool
    brew install wget
    
  3. Install Rust

    Rust is required to build the JSON package.

    RUST_INSTALLER=rust-1.80.1-$(if [ "$(uname -m)" = "arm64" ]; then echo "aarch64"; else echo "x86_64"; fi)-apple-darwin
    wget --quiet -O ${RUST_INSTALLER}.tar.xz https://static.rust-lang.org/dist/${RUST_INSTALLER}.tar.xz
    tar -xf ${RUST_INSTALLER}.tar.xz
    (cd ${RUST_INSTALLER} && sudo ./install.sh)
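    # Optional check: verify the Rust toolchain is installed
    rustc --version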
    
  4. Download the Redis source

    Download a specific version of the Redis source code archive from GitHub.

    Replace <version> with the Redis version, for example: 8.0.0.

    cd ~/src
    wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz
    
  5. Extract the source archive

    Create a directory for the source code and extract the contents into it:

    cd ~/src
    tar xvf redis-<version>.tar.gz
    rm redis-<version>.tar.gz
    
  6. Build Redis

    cd ~/src/redis-<version>
    export HOMEBREW_PREFIX="$(brew --prefix)"
    export BUILD_WITH_MODULES=yes
    export BUILD_TLS=yes
    export DISABLE_WERRORS=yes
    PATH="$HOMEBREW_PREFIX/opt/libtool/libexec/gnubin:$HOMEBREW_PREFIX/opt/llvm@18/bin:$HOMEBREW_PREFIX/opt/make/libexec/gnubin:$HOMEBREW_PREFIX/opt/gnu-sed/libexec/gnubin:$HOMEBREW_PREFIX/opt/coreutils/libexec/gnubin:$PATH"
    export LDFLAGS="-L$HOMEBREW_PREFIX/opt/llvm@18/lib"
    export CPPFLAGS="-I$HOMEBREW_PREFIX/opt/llvm@18/include"
    mkdir -p build_dir/etc
    make -j "$(nproc)" all OS=macos
    make install PREFIX=$(pwd)/build_dir OS=macos
    
  7. Run Redis

    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    build_dir/bin/redis-server redis-full.conf
    

Build and run Redis with all data structures - macOS 15 (Sequoia)

Support and instructions will be provided at a later date.

Building Redis - flags and general notes

Redis can be compiled and used on Linux, OSX, OpenBSD, NetBSD, and FreeBSD. We support big-endian and little-endian architectures, and both 32-bit and 64-bit systems.

It may compile on Solaris-derived systems (for instance SmartOS), but our support for this platform is best effort and Redis is not guaranteed to work as well as on Linux, OSX, and *BSD.

To build Redis with all the data structures (including JSON, time series, Bloom filter, cuckoo filter, count-min sketch, top-k, and t-digest) and with Redis Query Engine, make sure first that all the prerequisites are installed (see build instructions above, per operating system). You need to use the following flag in the make command:

make BUILD_WITH_MODULES=yes

To build Redis with just the core data structures, use:

make

To build with TLS support, you need OpenSSL development libraries (e.g. libssl-dev on Debian/Ubuntu) and the following flag in the make command:

make BUILD_TLS=yes

To build with systemd support, you need systemd development libraries (such as libsystemd-dev on Debian/Ubuntu or systemd-devel on CentOS), and the following flag:

make USE_SYSTEMD=yes

To append a suffix to Redis program names, add the following flag:

make PROG_SUFFIX="-alt"
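
With this flag, the suffix is appended to the program names; for example, with the "-alt" suffix above, the server and CLI binaries would be built as:

src/redis-server-alt
src/redis-cli-alt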

You can build a 32-bit Redis binary using:

make 32bit

After building Redis, it is a good idea to test it using:

make test

If Redis was built with TLS, you can run the tests with TLS enabled (you will need tcl-tls installed):

./utils/gen-test-certs.sh
./runtest --tls

Fixing build problems with dependencies or cached build options

Redis has some dependencies which are included in the deps directory. make does not automatically rebuild dependencies even if something in the source code of dependencies changes.

When you update the source code with git pull or when code inside the dependencies tree is modified in any other way, make sure to use the following command in order to really clean everything and rebuild from scratch:

make distclean

This will clean: jemalloc, lua, hiredis, linenoise and other dependencies.

Also, if you force certain build options, such as a 32-bit target or disabling C compiler optimizations (for debugging purposes), those options are cached indefinitely until you issue a make distclean command.
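
For example, a clean rebuild after pulling new code might look like this:

git pull
make distclean
make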

Fixing problems building 32-bit binaries

If after building Redis with a 32-bit target you need to rebuild it with a 64-bit target, or the other way around, you need to perform a make distclean in the root directory of the Redis distribution.

In case of build errors when trying to build a 32-bit binary of Redis, try the following steps:

  • Install the package libc6-dev-i386 (also try g++-multilib).
  • Try using the following command line instead of make 32bit: make CFLAGS="-m32 -march=native" LDFLAGS="-m32"

Allocator

Selecting a non-default memory allocator when building Redis is done by setting the MALLOC environment variable. Redis is compiled and linked against libc malloc by default, with the exception of jemalloc being the default on Linux systems. This default was picked because jemalloc has proven to have fewer fragmentation problems than libc malloc.

To force compiling against libc malloc, use:

make MALLOC=libc

To compile against jemalloc on Mac OS X systems, use:

make MALLOC=jemalloc
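
To confirm which allocator a running server was built with, you can check the mem_allocator field reported by INFO:

redis-cli INFO memory | grep mem_allocator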

Monotonic clock

By default, Redis will build using the POSIX clock_gettime function as the monotonic clock source. On most modern systems, the internal processor clock can be used to improve performance. Cautions can be found here: http://oliveryang.net/2015/09/pitfalls-of-TSC-usage/

To build with support for the processor's internal instruction clock, use:

make CFLAGS="-DUSE_PROCESSOR_CLOCK"

Verbose build

Redis will build with a user-friendly colorized output by default. If you want to see a more verbose output, use the following:

make V=1

Running Redis with TLS

Please consult the TLS.md file for more information on how to use Redis with TLS.
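
As an illustrative quick start (not a substitute for TLS.md), a server can be launched with the test certificates generated by ./utils/gen-test-certs.sh:

./utils/gen-test-certs.sh
./src/redis-server --tls-port 6379 --port 0 \
    --tls-cert-file ./tests/tls/redis.crt \
    --tls-key-file ./tests/tls/redis.key \
    --tls-ca-cert-file ./tests/tls/ca.crt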

Running Redis with the Query Engine and optional proprietary Intel SVS-VAMANA optimizations

License Disclaimer: If you are using Redis Open Source under AGPLv3 or SSPLv1, you cannot use it together with the Intel Optimizations (LeanVec and LVQ binaries). The reason is that the Intel SVS license is not compatible with those licenses. The LeanVec and LVQ techniques are closed source and are only available for use with Redis Open Source when distributed under the RSALv2 license. For more details, please refer to the information provided by Intel here.

By default, Redis with the Redis Query Engine supports the SVS-VAMANA index with global 8-bit quantization. To compile Redis with the Intel SVS-VAMANA optimizations, LeanVec and LVQ, use the following:

make BUILD_INTEL_SVS_OPT=yes

Alternatively, you can export the variable before running the build step for your platform:

export BUILD_INTEL_SVS_OPT=yes
make

Code contributions

By contributing code to the Redis project in any form, including sending a pull request via GitHub, a code fragment or patch via private email or public discussion groups, you agree to release your code under the terms of the Redis Software Grant and Contributor License Agreement. Please see the CONTRIBUTING.md file in this source distribution for more information. For security bugs and vulnerabilities, please see SECURITY.md and the description of the ability of users to backport security patches under Redis Open Source 7.4+ under BSDv3.

Open Source Redis releases are subject to the following licenses:

  1. Version 7.2.x and prior releases are subject to BSDv3. These contributions to the original Redis core project are owned by their contributors and licensed under the BSDv3 license as referenced in the REDISCONTRIBUTIONS.txt file. Any copy of that license in this repository applies only to those contributions;

  2. Versions 7.4.x to 7.8.x are subject to your choice of RSALv2 or SSPLv1; and

  3. Version 8.0.x and subsequent releases are subject to the tri-license RSALv2/SSPLv1/AGPLv3 at your option as referenced in the LICENSE.txt file.

Redis Trademarks

The purpose of a trademark is to identify the goods and services of a person or company without causing confusion. As the registered owner of its name and logo, Redis accepts certain limited uses of its trademarks, but it has requirements that must be followed as described in its Trademark Guidelines available at: https://redis.io/legal/trademark-policy/.