Dynamo uses consistent hashing to implement partitioning and replication, which is how it achieves high scalability and high availability. Its implementation adds several optimizations on top of classic consistent hashing; this post tries to explain them.

Partitioning Algorithm

One of the key design requirements for Dynamo is that it must scale incrementally. This requires a mechanism to dynamically partition the data over the set of nodes (i.e., storage hosts) in the system. Dynamo’s partitioning scheme relies on consistent hashing to distribute the load across multiple storage hosts. In consistent hashing [10], the output range of a hash function is treated as a fixed circular space or “ring” (i.e. the largest hash value wraps around to the smallest hash value). Each node in the system is assigned a random value within this space which represents its “position” on the ring. Each data item identified by a key is assigned to a node by hashing the data item’s key to yield its position on the ring, and then walking the ring clockwise to find the first node with a position larger than the item’s position.

Suppose the hash ring spans a range of pow(2, 32). Classic consistent hashing assigns each storage node a random position, e.g. directly random(0, pow(2,32)-1), to map it onto the ring; every two adjacent node positions delimit a region. When reading or writing an item, a hash function with the same pow(2,32) range computes the item's position, and the region it falls into decides which storage node is responsible.
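As a rough illustration, here is a minimal sketch of the classic scheme in Python. The node names, the choice of MD5 as the ring hash, and the `ConsistentHashRing` class are my own assumptions, not Dynamo's actual implementation:

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32

def ring_hash(key: str) -> int:
    # Map a node name or an item key onto the 32-bit ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

class ConsistentHashRing:
    def __init__(self, nodes):
        # Sorted (position, node) pairs so the ring can be walked clockwise.
        self.ring = sorted((ring_hash(n), n) for n in nodes)
        self.keys = [pos for pos, _ in self.ring]

    def lookup(self, item_key: str) -> str:
        # First node whose position is larger than the item's position,
        # wrapping around past the largest hash value.
        pos = ring_hash(item_key)
        idx = bisect.bisect_right(self.keys, pos) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-A", "node-B", "node-C"])
print(ring.lookup("item:42"))  # -> the node responsible for this item
```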

Thus, each node becomes responsible for the region in the ring between it and its predecessor node on the ring. The principal advantage of consistent hashing is that departure or arrival of a node only affects its immediate neighbors and other nodes remain unaffected.

The benefit of classic consistent hashing is that changes in node membership are handled smoothly: only the immediate neighbor needs to migrate data to a newly added node.

The basic consistent hashing algorithm presents some challenges. First, the random position assignment of each node on the ring leads to non-uniform data and load distribution. Second, the basic algorithm is oblivious to the heterogeneity in the performance of nodes. To address these issues, Dynamo uses a variant of consistent hashing (similar to the one used in [10, 20]): instead of mapping a node to a single point in the circle, each node gets assigned to multiple points in the ring. To this end, Dynamo uses the concept of “virtual nodes”. A virtual node looks like a single node in the system, but each node can be responsible for more than one virtual node. Effectively, when a new node is added to the system, it is assigned multiple positions (henceforth, “tokens”) in the ring. The process of fine-tuning Dynamo’s partitioning scheme is discussed in Section 6.

But the classic approach has drawbacks. Randomly computed node positions lead to non-uniform distribution, and heterogeneity between nodes is not taken into account. Dynamo introduces the concept of virtual nodes to address this.
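A minimal sketch of how virtual nodes might look, building on the `ConsistentHashRing` sketch above; the `node#i` token-naming convention and the `tokens_per_node` parameter are assumptions of mine:

```python
class VirtualNodeRing(ConsistentHashRing):
    """Reuses lookup() from the sketch above; only ring construction changes."""
    def __init__(self, nodes, tokens_per_node=8):
        # Each physical node is hashed to several positions ("tokens") on the
        # ring; a node with more capacity could simply be given more tokens.
        self.ring = sorted(
            (ring_hash(f"{node}#{i}"), node)   # token position -> physical node
            for node in nodes
            for i in range(tokens_per_node)
        )
        self.keys = [pos for pos, _ in self.ring]
```

Spreading many tokens per node evens out the region sizes, and giving a stronger machine more tokens is one way to account for heterogeneity.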

Replication

To achieve high availability and durability, Dynamo replicates its data on multiple hosts. Each data item is replicated at N hosts, where N is a parameter configured “per-instance”. Each key, k, is assigned to a coordinator node (described in the previous section). The coordinator is in charge of the replication of the data items that fall within its range. In addition to locally storing each key within its range, the coordinator replicates these keys at the N-1 clockwise successor nodes in the ring. This results in a system where each node is responsible for the region of the ring between it and its Nth predecessor. In Figure 2, node B replicates the key k at nodes C and D in addition to storing it locally. Node D will store the keys that fall in the ranges (A, B], (B, C], and (C, D].

To achieve high availability, Dynamo keeps multiple replicas, copying each data item to N consecutive nodes on the ring.
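A sketch of picking the N replica holders by walking the ring clockwise, reusing `ring_hash`, `bisect`, and the sorted `(position, node)` list from the first sketch; the name `replica_nodes` is hypothetical, and virtual nodes are ignored for the moment:

```python
def replica_nodes(ring, item_key, n=3):
    # ring: sorted (position, node) pairs. The coordinator plus its N-1
    # clockwise successors hold copies of the item.
    pos = ring_hash(item_key)
    keys = [p for p, _ in ring]
    start = bisect.bisect_right(keys, pos)
    return [ring[(start + i) % len(ring)][1] for i in range(n)]
```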

The list of nodes that is responsible for storing a particular key is called the preference list. The system is designed, as will be explained in Section 4.8, so that every node in the system can determine which nodes should be in this list for any particular key. To account for node failures, preference list contains more than N nodes. Note that with the use of virtual nodes, it is possible that the first N successor positions for a particular key may be owned by less than N distinct physical nodes (i.e. a node may hold more than one of the first N positions). To address this, the preference list for a key is constructed by skipping positions in the ring to ensure that the list contains only distinct physical nodes.

The one-to-many mapping from a key's region to storage nodes is called the preference list. This list has to be propagated and shared among Dynamo nodes, and it must contain only distinct physical nodes.
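A sketch of building the preference list while skipping ring positions owned by a physical node that has already been chosen; `walk_ring` and `preference_list` are hypothetical helpers layered on the earlier ring sketch:

```python
def walk_ring(ring, start_pos, n):
    # Walk clockwise from start_pos and collect the first N distinct
    # physical nodes, skipping positions owned by a node already chosen.
    keys = [p for p, _ in ring]
    start = bisect.bisect_right(keys, start_pos)
    nodes = []
    for i in range(len(ring)):
        node = ring[(start + i) % len(ring)][1]
        if node not in nodes:
            nodes.append(node)
        if len(nodes) == n:
            break
    return nodes

def preference_list(ring, item_key, n=3):
    # The first N distinct physical nodes clockwise from the key's position.
    return walk_ring(ring, ring_hash(item_key), n)
```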

Further Optimizations

Dynamo uses consistent hashing to partition its key space across its replicas and to ensure uniform load distribution. A uniform key distribution can help us achieve uniform load distribution assuming the access distribution of keys is not highly skewed. In particular, Dynamo’s design assumes that even where there is a significant skew in the access distribution there are enough keys in the popular end of the distribution so that the load of handling popular keys can be spread across the nodes uniformly through partitioning. This section discusses the load imbalance seen in Dynamo and the impact of different partitioning strategies on load distribution.

There are a few assumptions here regarding skew. First, when the access distribution over keys is not severely skewed, a uniform key distribution yields a reasonably uniform load. Second, Dynamo assumes that as long as there are enough popular keys, spreading them across nodes still balances the load even when the access distribution is skewed. Conversely, this implies that when traffic is not large overall and is concentrated on only a few popular keys, some nodes may become overloaded. Dynamo's partitioning strategy evolved accordingly, as follows. (There is also an implicit point here: load balance requires both a balanced data distribution and a balanced traffic distribution.)

Strategy 1: T random tokens per node and partition by token value: This was the initial strategy deployed in production (and described in Section 4.2). In this scheme, each node is assigned T tokens (chosen uniformly at random from the hash space). The tokens of all nodes are ordered according to their values in the hash space. Every two consecutive tokens define a range. The last token and the first token form a range that “wraps” around from the highest value to the lowest value in the hash space. Because the tokens are chosen randomly, the ranges vary in size. As nodes join and leave the system, the token set changes and consequently the ranges change. Note that the space needed to maintain the membership at each node increases linearly with the number of nodes in the system.

Strategy 1 is the original scheme: each physical node is virtualized into T virtual nodes, and each virtual node is assigned a random token. The ranges delimited by the tokens determine which data each node stores. Because tokens are chosen randomly, a complete membership table is needed to know the storage mapping of the whole Dynamo cluster, and that table grows linearly with the number of nodes.
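A sketch of Strategy 1's token assignment, using the same hypothetical `RING_SIZE` as above; it mainly shows that the membership table holds S*T entries and that randomly drawn tokens produce uneven ranges:

```python
import random

def strategy1_tokens(nodes, t):
    # Strategy 1: every node draws T tokens uniformly at random. Two
    # consecutive tokens delimit a range, so range sizes are uneven, and the
    # full membership table has S*T entries (linear in the number of nodes).
    return sorted((random.randrange(RING_SIZE), node)
                  for node in nodes for _ in range(t))
```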

While using this strategy, the following problems were encountered. First, when a new node joins the system, it needs to “steal” its key ranges from other nodes. However, the nodes handing the key ranges off to the new node have to scan their local persistence store to retrieve the appropriate set of data items. Note that performing such a scan operation on a production node is tricky as scans are highly resource intensive operations and they need to be executed in the background without affecting the customer performance. This requires us to run the bootstrapping task at the lowest priority. However, this significantly slows the bootstrapping process and during busy shopping season, when the nodes are handling millions of requests a day, the bootstrapping has taken almost a day to complete. Second, when a node joins/leaves the system, the key ranges handled by many nodes change and the Merkle trees for the new ranges need to be recalculated, which is a non-trivial operation to perform on a production system. Finally, there was no easy way to take a snapshot of the entire key space due to the randomness in key ranges, and this made the process of archival complicated. In this scheme, archiving the entire key space requires us to retrieve the keys from each node separately, which is highly inefficient.

This scheme still has drawbacks. First, when a new node joins, it must take data from some existing nodes; since that data is not stored contiguously, scanning for it is expensive and has to run in the background. Second, because the ranges change, the affected Merkle trees need to be recomputed. Finally, taking a snapshot of the whole key space is awkward. (Since I do not know the details of Dynamo's underlying storage, I cannot reason this through in more depth.)

The fundamental issue with this strategy is that the schemes for data partitioning and data placement are intertwined. For instance, in some cases, it is preferred to add more nodes to the system in order to handle an increase in request load. However, in this scenario, it is not possible to add nodes without affecting data partitioning. Ideally, it is desirable to use independent schemes for partitioning and placement. To this end, following strategies were evaluated:

The root cause is that data partitioning and data placement are coupled together!

Strategy 2: T random tokens per node and equal sized partitions: In this strategy, the hash space is divided into Q equally sized partitions/ranges and each node is assigned T random tokens. Q is usually set such that Q >> N and Q >> S*T, where S is the number of nodes in the system. In this strategy, the tokens are only used to build the function that maps values in the hash space to the ordered lists of nodes and not to decide the partitioning. A partition is placed on the first N unique nodes that are encountered while walking the consistent hashing ring clockwise from the end of the partition. Figure 7 illustrates this strategy for N=3. In this example, nodes A, B, C are encountered while walking the ring from the end of the partition that contains key k1. The primary advantages of this strategy are: (i) decoupling of partitioning and partition placement, and (ii) enabling the possibility of changing the placement scheme at runtime.

Strategy 2 separates the concepts of data partition and location: a partition is a shard of the data, while its location is determined by the ranges delimited by the tokens. Partitions are equally sized, and their number Q is usually very large. On a read or write, the item's position is hashed out and matched against the token-delimited ranges it falls into. At this stage the partition concept does not feel very significant yet; as the paper also notes, Strategy 2 was only a transitional scheme, and the one that really matters is Strategy 3.
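A sketch of how Strategy 2 might compute a key's partition and place a partition on nodes, reusing `ring_hash`, `RING_SIZE`, and `walk_ring` from the sketches above; `partition_of` and `place_partition` are names of my own:

```python
def partition_of(item_key, q):
    # Q equally sized partitions: the partition index follows directly
    # from the item's position in the hash space.
    return ring_hash(item_key) * q // RING_SIZE

def place_partition(ring, partition_idx, q, n=3):
    # Tokens now drive placement only: walk clockwise from the end of the
    # partition and take the first N distinct physical nodes.
    partition_end = (partition_idx + 1) * RING_SIZE // q - 1
    return walk_ring(ring, partition_end, n)
```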

Strategy 3: Q/S tokens per node, equal-sized partitions: Similar to strategy 2, this strategy divides the hash space into Q equally sized partitions and the placement of partition is decoupled from the partitioning scheme. Moreover, each node is assigned Q/S tokens where S is the number of nodes in the system. When a node leaves the system, its tokens are randomly distributed to the remaining nodes such that these properties are preserved. Similarly, when a node joins the system it “steals” tokens from nodes in the system in a way that preserves these properties.

Strategy 3 likewise divides the hash space into Q equal partitions, with Q large, and decouples partitions from storage placement (a mapping, not an identity). In addition, each storage node is assigned Q/S tokens (i.e. Q/S virtual nodes), so a virtual node's storage range now coincides with a partition; and because a physical node maps to many virtual nodes, data can still be migrated flexibly at virtual-node granularity. When a storage node leaves, its tokens are randomly distributed to the other nodes; when a node joins, it no longer picks random new tokens, but instead steals the tokens it needs from the existing ones.

Also, since the total number of tokens never changes, the mapping table stays constant in size. My guess: on a put/get, the position hashed from the item determines the partition, the partition maps to a token (since both are equally sized and equal in number, this can be a direct identity or computed straightforwardly), and the token maps to a node. When nodes are added or removed, the full token list is needed; tokens are randomly taken from it, data is migrated according to the token-to-node mapping, and that token-to-node mapping is updated, while the partition-to-token mapping never needs to change.
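To make the guess above concrete, here is a sketch of Strategy 3's bookkeeping as a fixed partition-to-node table of length Q. The paper does not spell out how tokens are redistributed at this level of detail, so the join/leave logic below is only an assumption:

```python
import random

def node_join(partition_owner, new_node):
    # partition_owner: a list of length Q mapping partition index -> node.
    # The joining node "steals" roughly Q/(S+1) randomly chosen partitions
    # (tokens); the partition boundaries themselves never move.
    q = len(partition_owner)
    s_after = len(set(partition_owner)) + 1
    for p in random.sample(range(q), q // s_after):
        partition_owner[p] = new_node      # only these entries change
    return partition_owner

def node_leave(partition_owner, dead_node):
    # The departing node's partitions are scattered over the remaining nodes.
    survivors = [n for n in set(partition_owner) if n != dead_node]
    for p, owner in enumerate(partition_owner):
        if owner == dead_node:
            partition_owner[p] = random.choice(survivors)
    return partition_owner
```

A get/put would then resolve a key with `partition_owner[partition_of(key, Q)]` (plus the next N-1 distinct nodes for replicas), and since Q is fixed, the table's size does not change as nodes come and go.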
