# Configure Leader Pinning

Produce requests that write data to Redpanda topics are routed through the topic partition leader, which syncs messages across its follower replicas. For a Redpanda cluster deployed across multiple availability zones (AZs), Leader Pinning ensures that a topic's partition leaders are geographically closer to clients, which helps decrease networking costs and lower latency. If consumers are located in the same preferred region or AZ for Leader Pinning, and you have not set up follower fetching, Leader Pinning can also help reduce networking costs on consume requests.

After reading this page, you will be able to:

- Configure preferred partition leader placement using rack labels
- Configure ordered rack preference for priority-based leader failover
- Identify conditions where Leader Pinning cannot place leaders in preferred racks

## Prerequisites

This feature requires an enterprise license. To get a trial license key or extend your trial period, generate a new trial license key. To purchase a license, contact Redpanda Sales. If Redpanda has enterprise features enabled and it cannot find a valid license, restrictions apply.

Before you can enable Leader Pinning, you must configure rack awareness on the cluster. If the `enable_rack_awareness` cluster configuration property is set to `false`, Leader Pinning is disabled across the cluster.

## Set leader rack preferences

You can configure Leader Pinning at the topic level, the cluster level, or both. Set the topic configuration property to configure individual topics, or set the cluster configuration property to apply a default for all topics. You can also combine both: apply a cluster-wide default, then override specific topics with the topic property.
This configuration is based on the following scenario: you have Redpanda deployed in a multi-AZ or multi-region cluster, and you have configured each broker so that the `rack` configuration property contains a rack identifier corresponding to the broker's AZ.

Set the topic configuration property `redpanda.leaders.preference`. This property accepts the following string values:

- `none`: Disable Leader Pinning for the topic.
- `racks:<rack1>[,<rack2>,…]`: Specify the preferred location (rack) of all topic partition leaders. The list can contain one or more racks, and you can list the racks in any order. Spaces in the list are ignored; for example, `racks:rack1,rack2` and `racks: rack1, rack2` are equivalent. You cannot specify empty racks, for example: `racks: rack1,,rack2`. If you specify multiple racks, Redpanda tries to distribute the partition leader locations equally across brokers in these racks.
- `ordered_racks:<rack1>[,<rack2>,…]`: Supported in Redpanda v26.1 or later. Specify the preferred racks in priority order. Redpanda places leaders in the first listed rack when available, failing over to each subsequent rack when higher-priority racks are unavailable. If all listed racks are unavailable, leaders fall back to any other available brokers. Brokers with no rack assignment are treated as lowest priority. Use `ordered_racks` for multi-region deployments with a primary region for leaders and explicit failover to a disaster recovery site.

The `redpanda.leaders.preference` property inherits its default value from the cluster property `default_leaders_preference`.
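In this scenario, each broker's rack label matches the AZ it runs in. A minimal sketch of the broker configuration, assuming the AZ name `us-west-2a` is illustrative:

```yaml
# redpanda.yaml on a broker running in us-west-2a (illustrative AZ name)
redpanda:
  rack: us-west-2a
```

Each broker in the cluster sets its own `rack` value, so brokers in `us-west-2b` and `us-west-2c` would use those identifiers instead.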
To find the rack identifiers of all brokers, run:

```bash
rpk cluster info
```

Expected output:

```
CLUSTER
=======
redpanda.be267958-279d-49cd-ae86-98fc7ed2de48

BROKERS
=======
ID    HOST           PORT  RACK
0*    54.70.51.189   9092  us-west-2a
1     35.93.178.18   9092  us-west-2b
2     35.91.121.126  9092  us-west-2c
```

To set the topic property:

```bash
rpk topic alter-config <topic-name> --set redpanda.leaders.preference=ordered_racks:<rack1>,<rack2>
```

Set the cluster configuration property `default_leaders_preference`, which specifies the default Leader Pinning configuration for all topics that don't have `redpanda.leaders.preference` explicitly set. It accepts values in the same format as `redpanda.leaders.preference`; the default is `none`. This property also affects internal topics, such as `__consumer_offsets` and transaction coordinators. All offset tracking and transaction coordination requests are handled within the preferred regions or AZs for all clients, so you see end-to-end latency and networking cost benefits.

To set the cluster property:

```bash
rpk cluster config set default_leaders_preference ordered_racks:<rack1>,<rack2>
```

If there is more than one broker in the preferred AZ (or AZs), Leader Pinning distributes partition leaders uniformly across brokers in the AZ.

## Limitations

Leader Pinning controls which replica is elected as leader; it does not move replicas to different brokers. If all of a topic's replicas are on brokers in non-preferred racks, no replica exists in the preferred racks to elect as leader, and Redpanda may elect a non-preferred leader indefinitely.

For example, consider a cluster deployed across four racks (A, B, C, D) with Leader Pinning configured as `ordered_racks:A,B,C,D`. With a replication factor of 3, rack awareness can only place replicas in three of the four racks. If the highest-priority rack (A) does not receive a replica, no replica exists there to elect as leader, and Redpanda may elect a non-preferred leader indefinitely.
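The four-rack example can be sketched as a toy model. The `elect_leader_rack` helper below is hypothetical; it models only the constraint that a leader must come from a rack that actually holds a replica, not Redpanda's actual leader balancer:

```python
def elect_leader_rack(preferred_racks, replica_racks):
    """Illustrative model: a leader can only be elected from racks that
    actually hold a replica, so a preferred rack without a replica is
    skipped. Not Redpanda's actual election logic.
    """
    for rack in preferred_racks:
        if rack in replica_racks:
            return rack
    return None  # no replica in any preferred rack; any broker may lead


# Four racks, ordered_racks:A,B,C,D, replication factor 3: rack
# awareness can place replicas in only three racks, say B, C, and D,
# so the highest-priority rack A holds no replica.
print(elect_leader_rack(["A", "B", "C", "D"], {"B", "C", "D"}))  # B
```

Even though rack A has the highest priority, leaders land in B because A has no replica to promote.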
To prevent this scenario:

- Enable `enable_rack_awareness` to distribute replicas across racks automatically.
- Ensure the topic's replication factor at least equals the total number of racks in the cluster, so every rack, including the highest-priority rack, receives a replica.

## Leader Pinning failover across availability zones

If there are three AZs (A, B, and C) and A becomes unavailable, the failover behavior with `racks` is as follows:

- A topic with A as the preferred leader AZ has its partition leaders uniformly distributed across B and C.
- A topic with A,B as the preferred leader AZs has its partition leaders in B.
- A topic with B as the preferred leader AZ has its partition leaders in B as well.

### Failover with ordered rack preference

With `ordered_racks`, the failover order follows the configured priority list. Leaders move to the next available rack in the list when higher-priority racks become unavailable. For a topic configured with `ordered_racks:A,B,C`:

- Leaders are placed in A, the first-priority rack.
- If A becomes unavailable, leaders move to B.
- If A and B become unavailable, leaders move to C.
- If A, B, and C all become unavailable, leaders fall back to any available brokers.

If a higher-priority rack recovers and the topic's replication factor ensures that rack receives a replica, Redpanda automatically moves leaders back to the highest-priority available rack.

## Suggested reading

- Follower Fetching
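The `racks` failover behavior above can be sketched as a round-robin placement over whichever preferred racks remain available. The `place_leaders` helper below is an illustrative model, not Redpanda's implementation:

```python
from itertools import cycle


def place_leaders(partitions, preferred_racks, brokers_by_rack, available):
    """Illustrative model of `racks:` failover: spread leaders uniformly
    across brokers in the available preferred racks, or across all
    available brokers when no preferred rack is available.
    """
    pool = [b for r in preferred_racks if r in available
            for b in brokers_by_rack.get(r, [])]
    if not pool:
        # No preferred rack is available: fall back to any available broker.
        pool = [b for r in available for b in brokers_by_rack.get(r, [])]
    rr = cycle(pool)
    return {p: next(rr) for p in partitions}


# Three AZs with one broker each; preference racks:A, then AZ A goes down:
brokers = {"A": ["a0"], "B": ["b0"], "C": ["c0"]}
leaders = place_leaders(range(4), ["A"], brokers, {"B", "C"})
# Leaders are split evenly between brokers b0 and c0.
```

When A recovers (and holds a replica), calling the model again with `{"A", "B", "C"}` available pins all leaders back onto `a0`, matching the recovery behavior described above.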