http://blog.csdn.net/MONKEY_D_MENG/article/details/6647488

Database Table Schema Design for Tree Structures

 

When programming, we often use tree structures to represent hierarchical relationships in data, such as a company's departments and sub-departments, site category hierarchies, product classifications, and so on. These tree structures usually have to be persisted in a database. However, today's relational databases all store data as two-dimensional tables, so a tree cannot be stored in a DBMS directly; designing a suitable schema, together with the CRUD algorithms that go with it, is the key to storing a tree structure in a relational database.

Ideally, a tree-structure schema should have the following properties: little storage redundancy and good readability; simple and efficient retrieval and traversal; and efficient create, read, update and delete (CRUD) operations on nodes. I stumbled across a very clever design on the web; the original was in English, and it struck me as interesting enough to write up. This article presents two schema designs for tree structures: a simple, intuitive one, and an improved design based on left/right value encoding.

1. Sample Data

This article uses a food taxonomy as the running example, organizing foods by category, color and variety. The tree looks like this:

 

2. Schema Design Driven by Parent-Child Relationships

The most intuitive way to look at a tree is through the parent-child relationships between its nodes. By explicitly recording each node's parent we can build an ordinary two-dimensional relational table, so the Tree table for this design is typically {Node_id, Parent_id}. The sample data above can then be described as in the following figure:

The advantages of this design are obvious: it is natural, intuitive, and easy to implement. The drawbacks are just as pronounced: because only the direct parent-child links are recorded, virtually every CRUD operation on the tree is inefficient, mainly because of the repeated "recursion" involved; the recursion keeps going back to the database, and every database I/O round trip costs time. The design is not useless, though: when the tree is fairly small we can compensate with caching, loading the whole tree into memory and working on it there, avoiding the I/O cost of hitting the database directly.
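For reference, here is a minimal T-SQL sketch of this parent-pointer design (the column types, the IDENTITY key, and the recursive CTE are my own illustration, not taken from the original figures):

-- Adjacency-list ("parent pointer") design: each row stores only its direct parent.
CREATE TABLE Tree
(
    Node_id   int IDENTITY(1,1) PRIMARY KEY,
    Parent_id int NULL,                -- NULL for the root node
    Name      varchar(50) NOT NULL
);

-- Fetching a whole subtree means walking the parent links level by level,
-- for example with a recursive common table expression:
WITH Subtree AS
(
    SELECT Node_id, Parent_id, Name FROM Tree WHERE Name = 'Fruit'
    UNION ALL
    SELECT t.Node_id, t.Parent_id, t.Name
    FROM Tree AS t JOIN Subtree AS s ON t.Parent_id = s.Node_id
)
SELECT * FROM Subtree;

Each level of the recursion is another pass over the data, which is exactly the cost the next design tries to avoid.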

3. Schema Design Based on Left/Right Value Encoding

In typical database applications, queries greatly outnumber deletes and updates. To avoid the "recursion" involved in querying a tree, we can design a brand new left/right value encoding based on the tree's preorder traversal; it stores the tree in a way that supports recursion-free queries and unlimited grouping depth.

On first seeing this table layout, most people will have no idea how the left value (Lft) and the right value (Rgt) are computed, and the design does not appear to record any parent-child relationships at all. But put your finger on the numbers in the table and count from 1 to 18, and you should notice something: the order in which your finger moves is exactly the order of a preorder traversal of the tree, as shown in the figure below. We start at the left side of the root node Food and label it 1, then follow the preorder traversal, writing consecutive numbers along the path, until we finally return to the root node Food and write 18 on its right side.


From this design we can infer that every node whose left value is greater than 2 and whose right value is less than 11 is a descendant of Fruit: the structure of the whole tree is captured by the left and right values alone. That is not enough, however; our goal is to perform CRUD operations on the tree, so we need the matching algorithms.
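For reference, a minimal T-SQL sketch of the left/right value table, filled with the food example; the values for Fruit (2, 11) and Beef (13, 14) are quoted in the text, while the remaining names and numbers are my reconstruction of the example tree and should be treated as an assumption:

CREATE TABLE Tree
(
    Node_id int IDENTITY(1,1) PRIMARY KEY,
    Name    varchar(50) NOT NULL,
    Lft     int NOT NULL,
    Rgt     int NOT NULL
);

-- Preorder numbering of the food tree: Food spans 1..18, Fruit 2..11, Meat 12..17.
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Food',    1, 18);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Fruit',   2, 11);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Red',     3,  6);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Cherry',  4,  5);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Yellow',  7, 10);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Banana',  8,  9);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Meat',   12, 17);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Beef',   13, 14);
INSERT INTO Tree(Name, Lft, Rgt) VALUES('Pork',   15, 16);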

4. CRUD Algorithms for the Tree Structure

(1) Getting a node's descendants

A single SQL statement returns the preorder list of the node's descendants. Taking Fruit as an example: SELECT * FROM Tree WHERE Lft BETWEEN 2 AND 11 ORDER BY Lft ASC. The query result is shown below:

So how many descendants does a given node have? Its left and right values fence its descendants in, so the count is: descendants = (Rgt - Lft - 1) / 2. For Fruit: (11 - 2 - 1) / 2 = 4. To display the tree more intuitively we also need to know each node's level in the tree, which can again be derived from the left and right values with a SQL query. For Fruit: SELECT COUNT(*) FROM Tree WHERE Lft <= 2 AND Rgt >= 11. For convenience we can build a view over Tree that adds a level column, computed by a user-defined function defined as follows:

CREATE FUNCTION dbo.CountLayer
(
    @node_id int
)
RETURNS int
AS
begin
    declare @result int
    set @result = 0
    declare @lft int
    declare @rgt int
    if exists(select Node_id from Tree where Node_id = @node_id)
    begin
        select @lft = Lft, @rgt = Rgt from Tree where Node_id = @node_id
        select @result = count(*) from Tree where Lft <= @lft and Rgt >= @rgt
    end
    return @result
end
GO

Using this level-counting function, we create a view that adds a new column recording each node's level:

CREATE VIEW dbo.TreeView
AS
SELECT Node_id, Name, Lft, Rgt, dbo.CountLayer(Node_id) AS Layer FROM dbo.Tree
-- note: ORDER BY is not allowed in a view definition; order by Lft in the queries that use the view
GO

Next, create a stored procedure that returns all descendants of a given node together with their levels:

CREATE PROCEDURE [dbo].[GetChildrenNodeList]
(
    @node_id int
)
AS
declare @lft int
declare @rgt int
if exists(select Node_id from Tree where Node_id = @node_id)
    begin
        select @lft = Lft, @rgt = Rgt from Tree where Node_id = @node_id
        select * from TreeView where Lft between @lft and @rgt order by Lft ASC
    end
GO

Now we can use the stored procedure above to list all descendants of the node Fruit and their levels; the query result is as follows:

 

 

From the implementation above we can see that with the left/right value design, traversing (querying) the tree takes only two database queries and no recursion at all, and since the predicates are simple numeric comparisons the queries are extremely efficient; as the tree grows, the left/right value design pulls further and further ahead of the traditional recursive approach. Of course, so far we have only shown a simple algorithm for fetching a node's descendants; to really work with the tree we also need insertion, deletion, sibling moves, and similar operations.

(2) Getting a node's ancestor path

Suppose we want a node's ancestor path, i.e. its chain of ancestors up to the root. From the left and right values, a single SQL statement is enough. For Fruit: SELECT * FROM Tree WHERE Lft < 2 AND Rgt > 11 ORDER BY Lft ASC. A more complete stored procedure:

CREATE PROCEDURE [dbo].[GetParentNodePath]
(
    @node_id int
)
AS
declare @lft int
declare @rgt int
if exists(select Node_id from Tree where Node_id = @node_id)
    begin
        select @lft = Lft, @rgt = Rgt from Tree where Node_id = @node_id
        select * from TreeView where Lft < @lft and Rgt > @rgt order by Lft ASC
    end
GO

(3) Adding a child node

Suppose we want to add a new child node "Apple" under the node "Red". The tree then changes as shown in the figure below, where the red node is the newly added one.

 

Look closely at how the left and right values change in the figure and you should be able to work out the SQL script yourself. A reasonably complete stored procedure for inserting a child node:

CREATE PROCEDURE [dbo].[AddSubNode]
(
    @node_id int,
    @node_name varchar(50)
)
AS
declare @rgt int
if exists(select Node_id from Tree where Node_id = @node_id)
    begin
        SET XACT_ABORT ON
        BEGIN TRANSACTION
        select @rgt = Rgt from Tree where Node_id = @node_id
        update Tree set Rgt = Rgt + 2 where Rgt >= @rgt
        update Tree set Lft = Lft + 2 where Lft >= @rgt
        insert into Tree(Name, Lft, Rgt) values(@node_name, @rgt, @rgt + 1)
        COMMIT TRANSACTION
        SET XACT_ABORT OFF
    end
GO

(4) Deleting a node

Deleting a node also deletes all of its descendants; the number of deleted nodes is (Rgt of the deleted node - Lft of the deleted node + 1) / 2, and the remaining nodes whose left or right values are greater than those of the deleted node must be adjusted. Let's see what happens to the tree, taking Beef as an example; the effect of the deletion is shown in the figure below.

The corresponding stored procedure:

CREATE PROCEDURE [dbo].[DelNode]
(
    @node_id int
)
AS
declare @lft int
declare @rgt int
if exists(select Node_id from Tree where Node_id = @node_id)
    begin
        SET XACT_ABORT ON
        BEGIN TRANSACTION
            select @lft = Lft, @rgt = Rgt from Tree where Node_id = @node_id
            delete from Tree where Lft >= @lft and Rgt <= @rgt
            update Tree set Lft = Lft - (@rgt - @lft + 1) where Lft > @lft
            update Tree set Rgt = Rgt - (@rgt - @lft + 1) where Rgt > @rgt
            COMMIT TRANSACTION
        SET XACT_ABORT OFF
    end
GO

5. Summary

To sum up this schema design, which uses left/right value encoding to implement unlimited grouping for tree structures:

(1) Advantages: it achieves unlimited grouping without any recursion, and the query predicates are integer comparisons, so queries are very efficient.

(2) Disadvantages: adding, deleting and updating nodes is relatively expensive, since each change touches many rows across the table.

This article only implements a few of the more common CRUD algorithms; you can add operations such as moving a node among its siblings, moving a node down, moving a node up, and so on. Interested readers can try coding them up themselves; they are not listed here. Be aware that implementing these algorithms can get messy: they involve a sequence of update statements, and if their ordering is not thought through carefully, a bug can do astonishing damage to the whole tree table. For large-scale modifications of the tree you can therefore use a temporary table as an intermediary to keep the code manageable, and it is strongly recommended to take a full backup of the table before any such modification, just in case. For the vast majority of query-dominated database applications, this scheme is a better fit than the traditional schema built on parent-child relationships.
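As a concrete illustration of the backup advice above, a minimal T-SQL sketch (the backup table name is just an example):

-- Take a full copy of the tree table before a risky batch of structural updates.
SELECT * INTO Tree_Backup FROM Tree;

-- If the updates go wrong, the rows can be copied back from Tree_Backup
-- (or the whole batch can simply be wrapped in an explicit transaction and rolled back).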

 

 

https://communities.bmc.com/communities/docs/DOC-9902

Trees in SQL: Nested Sets and Materialized Path

by Vadim Tropashko

 

Relational databases are universally conceived of as an advance over their predecessors, the network and hierarchical models. Superior in every querying respect, they turned out to be surprisingly incomplete when modeling transitive dependencies. Almost every couple of months a question about how to model a tree in the database pops up at the comp.database.theory newsgroup. In this article I'll investigate two out of four well-known approaches to accomplishing this and show a connection between them. We'll discover a new method that could be considered as a "mix-in" between materialized path and nested sets.

Adjacency List

Tree structure is a special case of a Directed Acyclic Graph (DAG). One way to represent a DAG structure is:

 

create table emp (
ename   varchar2(100),
mgrname varchar2(100)
);

 

Each record of the emp table, identified by ename, refers to its parent mgrname. For example, if JONES reports to KING, then the emp table contains the <ename='JONES', mgrname='KING'> record. Suppose the emp table also includes <ename='SCOTT', mgrname='JONES'>. Then, if the emp table doesn't contain the <ename='SCOTT', mgrname='KING'> record, and the same is true for every pair of adjoined records, it is called an adjacency list. If the opposite is true, then the emp table is a transitively closed relation.
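For concreteness, the two sample records mentioned above could be loaded like this (a trivial sketch, not part of the original article):

insert into emp values ('JONES', 'KING');
insert into emp values ('SCOTT', 'JONES');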

A typical hierarchical query would ask if SCOTT indirectly reports to KING. Since we don't know the number of levels between the two, we can't tell how many times to self-join emp, so the task can't be solved in traditional SQL. If the transitive closure tcemp of the emp table is known, then the query is trivial:

 

select 'TRUE' from tcemp
where ename = 'SCOTT' and mgrname = 'KING'

 

The ease of querying comes at the expense of transitive closure  maintenance.
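The article doesn't show that maintenance code; as a rough sketch of what it involves, adding one employee under a manager means inserting not just the direct edge but one closure row per ancestor of the manager (the tcemp table and its columns follow the text; the statement itself and the literals NEW_EMP/MGR are my placeholders):

-- New employee NEW_EMP reporting to MGR: record the direct link
-- plus one row for every ancestor already recorded for MGR.
insert into tcemp (ename, mgrname)
select 'NEW_EMP', 'MGR' from dual
union all
select 'NEW_EMP', mgrname from tcemp where ename = 'MGR';

Deleting or re-parenting a subtree requires correspondingly more work, which is the maintenance cost referred to above.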

Alternatively, hierarchical queries can be answered with SQL extensions:  either SQL3/DB2 recursive query

 

with tcemp as (
select ename, mgrname from emp
union
select tcemp.ename, emp.mgrname from tcemp, emp
where tcemp.mgrname = emp.ename
) select 'TRUE' from tcemp
where ename = 'SCOTT' and mgrname = 'KING';

 

that calculates tcemp as an intermediate relation, or Oracle proprietary  connect-by syntax

 

select 'TRUE' from (
select ename from emp
connect by prior mgrname = ename
start with ename = 'SCOTT'
) where ename = 'KING';

 

in which the inner query “chases the pointers” from the SCOTT node to the  root of the tree, and then the outer query checks whether the KING node is on  the path.

Adjacency list is arguably the most intuitive tree model. Our main focus,  however, would be the following two methods.

Materialized Path

In this approach each record stores the whole path to the root. In our previous example, let's assume that KING is a root node. Then, the record with ename = 'SCOTT' is connected to the root via the path SCOTT->JONES->KING. Modern databases allow representing a list of nodes as a single value, but since the materialized path was invented long before that, the convention stuck to a plain character string of nodes concatenated with some separator; most often '.' or '/'. In the latter case, the analogy to pathnames in the UNIX file system is especially pronounced.

In a more compact variation of the method, we use sibling numbers instead of the nodes' primary keys within the path string. Extending our example:

 

 

ENAME   PATH
KING    1
JONES   1.1
SCOTT   1.1.1
ADAMS   1.1.1.1
FORD    1.1.2
SMITH   1.1.2.1
BLAKE   1.2
ALLEN   1.2.1
WARD    1.2.2
CLARK   1.3
MILLER  1.3.1

 

Path 1.1.2 indicates that FORD is the second child of  the parent JONES.
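The queries that follow assume emp now carries this path column; a minimal sketch of how it might be added and populated (the column length is arbitrary):

alter table emp add (path varchar2(400));

update emp set path = '1'     where ename = 'KING';
update emp set path = '1.1'   where ename = 'JONES';
update emp set path = '1.1.1' where ename = 'SCOTT';
-- ... and so on for the remaining rows of the table above.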

Let’s write some queries.

1. An employee FORD and chain of his supervisors:

 

select e1.ename from emp e1, emp e2
where e2.path like e1.path || '%'
and e2.ename = 'FORD'

 

2. An employee JONES and all his (indirect) subordinates:

 

select e1.ename from emp e1, emp e2
where e1.path like e2.path || '%'
and e2.ename = 'JONES'

 

Although both queries look symmetrical, there is a fundamental difference in their respective performance. If a subtree of subordinates is small compared to the size of the whole hierarchy, then the execution plan for the subordinates query, in which the database fetches the e2 record by the ename key and then performs a range scan on e1.path, is guaranteed to be quick.
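That range scan presupposes suitable indexes, which the article doesn't spell out; a minimal sketch, assuming Oracle and the emp table above:

-- Key lookup for the e2 row and a prefix (range) scan for e1.path like e2.path || '%'
create unique index emp_ename_idx on emp (ename);
create index emp_path_idx on emp (path);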

On the other hand, the “supervisors” query is roughly equivalent to

 

select e1.ename from emp e1, emp e2
where e2.path > e1.path and e2.path < e1.path || 'Z'
and e2.ename = 'FORD'

 

Or, noticing that we essentially know e2.path, it can further be reduced  to

 

select e1.ename from emp e1
where e2path > e1.path and e2path < e1.path || 'Z'

 

Here, it is clear that indexing on path doesn’t work (except for “accidental”  cases in which e2path happens to be near the domain boundary, so that  predicate e2path > e1.path is selective).

The obvious solution is that we don’t have to refer to the database to figure  out all the supervisor paths! For example, supervisors of 1.1.2 are 1.1 and 1. A  simple recursive string parsing function can extract those paths, and then the  supervisor names can be answered by

 

select e1.ename from emp e1 where e1.path in ('1.1','1')

 

which should be executed as a fast concatenated plan.
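The article leaves that string-parsing helper to the reader. One possible PL/SQL sketch (the function name and shape are mine) turns a path such as '1.1.2' into a quoted list of its ancestors' paths:

-- ancestor_paths('1.1.2') returns the string '1.1','1'
create or replace function ancestor_paths( p varchar2 )
return varchar2 is
  prefix varchar2(1000) := p;
  result varchar2(4000);
begin
  while instr(prefix, '.') > 0 loop
    -- strip the last label off the path to get the next ancestor
    prefix := substr(prefix, 1, instr(prefix, '.', -1) - 1);
    if result is not null then
      result := result || ',';
    end if;
    result := result || '''' || prefix || '''';
  end loop;
  return result;
end;

In practice the application would split this list (or build the IN predicate dynamically) before running the query above, since an IN list cannot be bound as a single string.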

Nested Sets

Both the materialized path and Joe Celko's nested sets provide the capability to answer hierarchical queries with standard SQL syntax. In both models, the global position of the node in the hierarchy is "encoded", as opposed to an adjacency list, in which each link is only a local connection between immediate neighbors. Similar to materialized path, the nested sets model suffers from the supervisors query performance problem:

 

select p2.emp from Personnel p1, Personnel p2
where p1.lft between p2.lft and p2.rgt
and p1.emp = 'Chuck'

 

(Note: This query is borrowed from the previously cited Celko article). Here, the problem is even more explicit than in the case of a materialized path: we need to find all the intervals that cover a given point. This problem is known to be difficult. Although there are specialized indexing schemes like R-Tree, none of them is as universally accepted as B-Tree. For example, if the supervisor's path contains just 10 nodes and the size of the whole tree is 1000000, none of the indexing techniques could provide the 1000000/10=100000 times performance increase. (Such a performance improvement factor is typically associated with an index range scan in a similarly very selective data volume condition.)

 

Unlike a materialized path, the trick by which we computed all the nodes  without querying the database doesn’t work for nested sets.

 

Another — more fundamental — disadvantage of nested sets is that nested sets  coding is volatile. If we insert a node into the middle of the  hierarchy, all the intervals with the boundaries above the insertion point have  to be recomputed. In other words, when we insert a record into the database,  roughly half of the other records need to be updated. This is why the nested  sets model received only limited acceptance for static hierarchies.

 

Nested sets are intervals of integers. In an attempt to make the nested sets model more tolerant to insertions, Celko suggested we give up the property that the interval is completely packed, i.e. that a node with boundaries lft and rgt always has exactly (rgt-lft-1)/2 descendants. In my opinion, this is a half-step towards a solution: even in a nested sets model with large gaps and spreads in the numbering, any gap can still end up covered by intervals that leave no space for adding more children, as long as those intervals are allowed to have boundaries at discrete points (i.e., integers) only. One needs to use a dense domain like rational, or real, numbers instead.

Nested Intervals

Nested intervals generalize nested sets. A node [clft, crgt] is an (indirect)  descendant of [plft, prgt] if:

 

plft <= clft and crgt <= prgt

 

The domain for interval boundaries is not limited to integers anymore: we admit rational or even real numbers, if necessary. Now, with a reasonable policy, adding a child node is never a problem. One example of such a policy would be finding an unoccupied segment [lft1, rgt1] within a parent interval [plft, prgt] and inserting a child node [(2*lft1+rgt1)/3, (2*rgt1+lft1)/3]:

trees-in-sql-fig1.bmp

After insertion, we still have two more unoccupied segments [lft1, (2*lft1+rgt1)/3] and [(2*rgt1+lft1)/3, rgt1] to add more children to the parent node.

We are going to amend this naive policy in the following sections.

Partial Order

Let’s look at two-dimensional picture of nested intervals. Let’s assume that  rgt is a horizontal axis x, and lft is a vertical one – y. Then, the nested  intervals tree looks like this:

trees-in-sql-fig2.bmp

Each node [lft, rgt] has its descendants bounded within the two-dimensional cone y >= lft & x <= rgt. Since the left interval boundary is always less than the right one, none of the nodes are allowed above the diagonal y = x.

The other way to look at this picture is to notice that a child node is a  descendant of the parent node whenever a set of all points defined by the child  cone y >= clft & x <= crgt is a subset of the parent cone y >= plft  & x <= prgt. A subset relationship between the cones on the plane is a  partial order.

Now that we know the two constraints to which tree nodes conform, I’ll  describe exactly how to place them at the xy plane.

The Mapping

Tree root choice is completely arbitrary: we’ll assume the interval [0,1] to  be the root node. In our geometrical interpretation, all the tree nodes belong  to the lower triangle of the unit square at the xy plane.

 

We’ll describe further details of the mapping by induction. For each node of  the tree, let’s first define two important points at the xy plane. The  depth-first convergence point is an intersection between the diagonal  and the vertical line through the node. For example, the depth-first convergence  point for <x=1,y=1/2> is  <x=1,y=1>. The breadth-first convergence point  is an intersection between the diagonal and the horizontal line through the  point. For example, the breadth-first convergence point for  <x=1,y=1/2> is  <x=1/2,y=1/2>.

 

Now, for each parent node, we define the position of the first child as the midpoint halfway between the parent point and the parent's depth-first convergence point. Then, each subsequent sibling is defined as the midpoint halfway between the previous sibling point and the parent's breadth-first convergence point:

trees-in-sql-fig3.bmp

For example, node 2.1 is positioned at x=1/2, y=3/8.
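This can be checked against the coordinate functions defined in the Normalization section below, using the fact (shown later) that node 2.1 is encoded by the sum 7/8; a small verification sketch:

select x_numer(7,8) || '/' || x_denom(7,8) as x,
       y_numer(7,8) || '/' || y_denom(7,8) as y
from dual;

-- X    Y
-- 1/2  3/8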

 

Now that the mapping is defined, it is clear which dense domain we are using:  it’s not rationals, and not reals either, but binary fractions (although, the  former two would suffice, of course).

 

Interestingly, the descendant subtree for the parent node “1.2” is a scaled  down replica of the subtree at node “1.1.” Similarly, a subtree at node 1.1 is a  scaled down replica of the tree at node “1.” A structure with self-similarities  is called a fractal.

Normalization

Next, we notice that x and y are not completely independent. We can tell what both x and y are if we know their sum. Given the numerator and denominator of the rational number representing the sum of the node coordinates, we can calculate the x and y coordinates back as:

 

function x_numer( numer integer, denom integer )
RETURN integer IS
ret_num integer;
ret_den integer;
BEGIN
ret_num := numer+1;
ret_den := denom*2;
while floor(ret_num/2) = ret_num/2 loop
ret_num := ret_num/2;
ret_den := ret_den/2;
end loop;
RETURN ret_num;
END;

function x_denom( numer integer, denom integer )

RETURN ret_den;
END;

 

 

in which function x_denom body differs from x_numer in the return variable  only. Informally, numer+1 increment would move the ret_num/ret_den point  vertically up to the diagonal, and then x coordinate is half of the value, so we  just multiplied the denominator by two. Next, we reduce both numerator and  denominator by the common power of two.

Naturally, y coordinate is defined as a complement to the sum:

 

function y_numer( numer integer, denom integer )
RETURN integer IS
num integer;
den integer;
BEGIN
num := x_numer(numer, denom);
den := x_denom(numer, denom);
while den < denom loop
num := num*2;
den := den*2;
end loop;
num := numer - num;
while floor(num/2) = num/2 loop
num := num/2;
den := den/2;
end loop;
RETURN num;
END;

function y_denom( numer integer, denom integer )

RETURN den;
END;

 

 

Now, the test (where 39/32 is the node 1.3.1):

 

select x_numer(39,32)||'/'||x_denom(39,32),
y_numer(39,32)||'/'||y_denom(39,32) from dual

 

5/8     19/32

 

select 5/8+19/32, 39/32 from dual

 

1.21875 1.21875

 

I don’t use a floating point to represent rational numbers, and wrote all the  functions with integer arithmetics instead. To put it bluntly, the floating  point number concept in general, and the IEEE standard in particular, is useful  for rendering 3D-game graphics only. In the last test, however, we used a  floating point just to verify that 5/8 and 19/32, returned by the previous  query, do indeed add to 39/32.

 

We’ll store two integer numbers — numerator and denominator  of the sum of the coordinates x and y — as an encoded  node path. Incidentally, Celko’s nested sets use two integers as well.  Unlike nested sets, our mapping is stable: each node has a  predefined placement at the xy plane, so that the queries involving  node position in the hierarchy could be answered without reference to the  database. In this respect, our hierarchy model is essentially a materialized  path encoded as a rational number.

Finding Parent Encoding and Sibling Number

Given a child node with numer/denom encoding, we find the node’s parent like  this:

 

function parent_numer( numer integer, denom integer )
RETURN integer IS
ret_num integer;
ret_den integer;
BEGIN
if numer=3 then
return NULL;
end if;
ret_num := (numer-1)/2;
ret_den := denom/2;
while floor((ret_num-1)/4) = (ret_num-1)/4 loop
ret_num := (ret_num+1)/2;
ret_den := ret_den/2;
end loop;
RETURN ret_num;
END;

function parent_denom( numer integer, denom integer )

RETURN ret_den;
END;

 

 

The idea behind the algorithm is the following: If the node is on the very  top level — and all these nodes have a numerator equal to 3 — then the node has  no parent. Otherwise, we must move vertically down the xy plane at a  distance equal to the distance from the depth-first convergence point. If the  node happens to be the first child, then that is the answer. Otherwise, we must  move horizontally at a distance equal to the distance from the breadth-first  convergence point until we meet the parent node.

Here is the test of the method (in which 27/32 is the node 2.1.2, while 7/8  is 2.1):

 

select parent_numer(27,32)||'/'||parent_denom(27,32) from dual

7/8

 

 

In the previous method, counting the steps when navigating horizontally would  give the sibling number:

 

function sibling_number( numer integer, denom integer )

RETURN integer IS
ret_num integer;
ret_den integer;
ret integer;
BEGIN
-- no special case for first-level nodes here: a node with numerator 3
-- terminates through the ret_num = 1 and ret_den = 1 check inside the loop
ret_num := (numer-1)/2;
ret_den := denom/2;
ret     := 1;
while floor((ret_num-1)/4) = (ret_num-1)/4 loop
if ret_num=1 and ret_den=1 then
return ret;
end if;
ret_num := (ret_num+1)/2;
ret_den := ret_den/2;
ret     := ret+1;
end loop;
RETURN ret;
END;

 

For a node at the very first level a special stop condition, ret_num=1  and ret_den=1 is needed.

The test:

 

select sibling_number(7,8) from dual

1

 

Calculating Materialized Path and Distance between nodes

Strictly speaking, we don’t have to use a materialized path, since our  encoding is an alternative. On the other hand, a materialized path provides a  much more intuitive visualization of the node position in the hierarchy, so that  we can use the materialized path for input and output of the data if we provide  the mapping to our model.

Implementation is a simple application of the methods from the previous  section. We print the sibling number, jump to the parent, then repeat the above  two steps until we reach the root:

 

function path( numer integer, denom integer )
RETURN varchar2 IS
BEGIN
if numer is NULL then
return '';
end if;
RETURN path(parent_numer(numer, denom),
parent_denom(numer, denom))
|| '.' || sibling_number(numer, denom);
END;

 

select path(15,16) from dual

.2.1.1

 

Now we are ready to write the main query: given two nodes, P and C, is P the parent of C? A more general query would return the number of levels between P and C if C is reachable from P, and some exception indicator otherwise:

 

function distance( num1 integer, den1 integer,
num2 integer, den2 integer )
RETURN integer IS
BEGIN
if num1 is NULL then
return -999999;
end if;
if num1=num2 and den1=den2 then
return 0;
end if;
RETURN 1+distance(parent_numer(num1, den1),
parent_denom(num1, den1),
num2,den2);
END;

 

select distance(27,32,3,4) from dual

2

 

Negative numbers are interpreted as exceptions. If the num1/den1 node is not  reachable from num2/den2, then the navigation converges to the root, and  level(num1/den1)-999999 would be returned (readers are advised to find a less  clumsy solution).

 

The alternative way to answer whether two nodes are connected is by simply calculating the x and y coordinates, and checking if the parent interval encloses the child. Although neither method refers to disk, checking whether the partial order exists between the points seems much less expensive! On the other hand, it is just a computer architecture artifact that comparing two integers is an atomic operation. A more thorough implementation of the method would involve a domain of integers with an unlimited range (those kinds of numbers are supported by computer algebra systems), so that a comparison operation would be iterative as well.
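As a sketch of that alternative (the function name and the cross-multiplication comparison are mine; it reuses the x_/y_ coordinate functions defined above and the cone condition from the Partial Order section):

-- Returns 1 if num1/den1 lies inside the cone of num2/den2, i.e. num2/den2
-- is an ancestor (or the node itself); 0 otherwise. Fractions are compared
-- by cross-multiplication so that everything stays in integer arithmetic.
create or replace function is_descendant
( num1 integer, den1 integer, num2 integer, den2 integer )
RETURN integer IS
BEGIN
if  y_numer(num1,den1) * y_denom(num2,den2) >=
    y_numer(num2,den2) * y_denom(num1,den1)      -- child lft >= parent lft
and x_numer(num1,den1) * x_denom(num2,den2) <=
    x_numer(num2,den2) * x_denom(num1,den1)      -- child rgt <= parent rgt
then
  return 1;
end if;
return 0;
END;

For instance, is_descendant(27,32, 3,4) returns 1, consistent with the distance test above.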

 

Our system wouldn’t be complete without a function inverse to the path, which  returns a node’s numer/denom value once the path is provided. Let’s introduce  two auxiliary functions, first:

 

function child_numer
( num integer, den integer, child integer )
RETURN integer IS
BEGIN
RETURN num*power(2, child)+3-power(2, child);
END;

 

function child_denom
( num integer, den integer, child integer )
RETURN integer IS
BEGIN
RETURN den*power(2, child);
END;

 

select child_numer(3,2,3) || '/' ||
child_denom(3,2,3) from dual

19/16

 

For example, the third child of the node 1 (encoded as 3/2) is the node 1.3  (encoded as 19/16).

The path encoding function is:

function path_numer( path varchar2 )
RETURN integer IS
num integer;
den integer;
postfix varchar2(1000);
sibling varchar2(100);
BEGIN
num := 1;
den := 1;
postfix := '.' || path || '.';

 

while length(postfix) > 1 loop
sibling := substr(postfix, 2,
instr(postfix,'.',2)-2);
postfix := substr(postfix,
instr(postfix,'.',2),
length(postfix)
-instr(postfix,'.',2)+1);
num := child_numer(num,den,to_number(sibling));
den := child_denom(num,den,to_number(sibling));
end loop;

 

RETURN num;
END;

 

function path_denom( path varchar2 )

RETURN den;
END;

 

select path_numer('2.1.3') || '/' ||
path_denom('2.1.3') from dual
51/64

The Final Test

Now that the infrastructure is completed, we can test it. Let’s create the  hierarchy

create table emps (
name varchar2(30),
numer integer,
denom integer
)

alter table emps
ADD CONSTRAINT uk_name UNIQUE (name) USING INDEX
(CREATE UNIQUE INDEX name_idx on emps(name))
ADD CONSTRAINT UK_node
UNIQUE (numer, denom) USING INDEX
(CREATE UNIQUE INDEX node_idx on emps(numer, denom))

 

and fill it with some data:

 

insert into emps values ('KING',
path_numer('1'),path_denom('1'));
insert into emps values ('JONES',
path_numer('1.1'),path_denom('1.1'));
insert into emps values ('SCOTT',
path_numer('1.1.1'),path_denom('1.1.1'));
insert into emps values ('ADAMS',
path_numer('1.1.1.1'),path_denom('1.1.1.1'));
insert into emps values ('FORD',
path_numer('1.1.2'),path_denom('1.1.2'));
insert into emps values ('SMITH',
path_numer('1.1.2.1'),path_denom('1.1.2.1'));
insert into emps values ('BLAKE',
path_numer('1.2'),path_denom('1.2'));
insert into emps values ('ALLEN',
path_numer('1.2.1'),path_denom('1.2.1'));
insert into emps values ('WARD',
path_numer('1.2.2'),path_denom('1.2.2'));
insert into emps values ('MARTIN',
path_numer('1.2.3'),path_denom('1.2.3'));
insert into emps values ('TURNER',
path_numer('1.2.4'),path_denom('1.2.4'));
insert into emps values ('CLARK',
path_numer('1.3'),path_denom('1.3'));
insert into emps values ('MILLER',
path_numer('1.3.1'),path_denom('1.3.1'));
commit;

 

All the functions written in the previous sections are conveniently combined  in a single view:

 

create or replace
view hierarchy as
select name, numer, denom,
y_numer(numer,denom) numer_left,
y_denom(numer,denom) denom_left,
x_numer(numer,denom) numer_right,
x_denom(numer,denom) denom_right,
path (numer,denom) path,
distance(numer,denom,3,2) depth
from emps

 

And, finally, we can create the hierarchical reports.

    • Depth-first enumeration, ordering by left interval  boundary

 

select lpad(' ',3*depth)||name
from hierarchy order by numer_left/denom_left

 

LPAD(' ',3*DEPTH)||NAME
———————————————–
KING
CLARK
MILLER
BLAKE
TURNER
MARTIN
WARD
ALLEN
JONES
FORD
SMITH
SCOTT
ADAMS

 

    • Depth-first enumeration, ordering by right interval  boundary

 

select lpad(' ',3*depth)||name
from hierarchy order by numer_right/denom_right desc

 

LPAD(' ',3*DEPTH)||NAME
—————————————————–
KING
JONES
SCOTT
ADAMS
FORD
SMITH
BLAKE
ALLEN
WARD
MARTIN
TURNER
CLARK
MILLER

 

    • Depth-first enumeration, ordering by path (output identical to  #2)

 

select lpad(' ',3*depth)||name
from hierarchy order by path

 

LPAD(' ',3*DEPTH)||NAME
—————————————————–
KING
JONES
SCOTT
ADAMS
FORD
SMITH
BLAKE
ALLEN
WARD
MARTIN
TURNER
CLARK
MILLER

    • All the descendants of JONES, excluding himself:

 

select h1.name from hierarchy h1, hierarchy h2
where h2.name = 'JONES'
and distance(h1.numer, h1.denom,
h2.numer, h2.denom)>0;

NAME
——————————
SCOTT
ADAMS
FORD
SMITH

    • All the ancestors of FORD, excluding himself:

 

select h2.name from hierarchy h1, hierarchy h2
where h1.name = 'FORD'
and distance(h1.numer, h1.denom,
h2.numer, h2.denom)>0;

 

NAME
——————————
KING
JONES

Vadim Tropashko works for the Real World Performance group at Oracle Corp. In a prior life he was an application programmer and translated "The C++ Programming Language" by B. Stroustrup, 2nd edition, into Russian. His current interests include SQL Optimization, Constraint Databases, and Computer Algebra Systems.
