A good post about trees.


I know that performance is never black and white; often one implementation is faster in case X and slower in case Y, etc. But in general: are B-trees faster than AVL or red-black trees? They are considerably more complex to implement than AVL trees (and maybe even red-black trees?), but are they faster (does their complexity pay off)?


Edit: I'd also like to add: if they are faster than the equivalent AVL/red-black tree (in terms of nodes/content), why are they faster?


Tags: algorithm, math, data-structures, binary-tree
asked Mar 15 '09 at 9:20 by thr (edited Feb 4 '10 at 17:15 by deft_code)
closed as off topic by Parag Bafna, ethrbunny, Laurent Etiemble, Smi, bmargulies Feb 17 '13 at 14:15


9 Answers
64 votes (accepted)
Sean's post (the currently accepted one) is full of nonsense. Sorry Sean, I don't mean to be rude; I hope I can convince you that my statement is based in fact.


"They're totally different in their use cases, so it's not possible to make a comparison."


They're both used for maintaining a set of totally ordered items with fast lookup, insertion and deletion. They have the same interface and the same intention.


"RB trees are typically in-memory structures used to provide fast access (ideally O(logN)) to data. [...]"


It's always O(log n), not just ideally.


"B-trees are typically disk-based structures, and so are inherently slower than in-memory data."


Nonsense. When you store search trees on disk, you typically use B-trees. That much is true. When you store data on disk, it's slower to access than data in memory. But a red-black tree stored on disk is also slower than a red-black tree stored in memory.


You're comparing apples and oranges here. What is really interesting is a comparison of in-memory B-trees and in-memory red-black trees.


[As an aside: B-trees, as opposed to red-black trees, are theoretically efficient in the I/O-model. I have experimentally tested (and validated) the I/O-model for sorting; I'd expect it to work for B-trees as well.]


"B-trees are rarely binary trees; the number of children a node can have is typically a large number."


To be clear, the size range of B-tree nodes is a parameter of the tree (in C++, you may want to use an integer value as a template parameter).
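

For illustration, here is a minimal sketch (with hypothetical names) of what such a node might look like when the maximum fan-out is fixed at compile time as a template parameter:

```cpp
#include <cstddef>

// Hypothetical sketch: a fixed-order B-tree node whose maximum fan-out is a
// compile-time template parameter. Names and layout are illustrative only.
template <typename Key, std::size_t Order>   // Order = maximum number of children
struct BTreeNode {
    std::size_t num_keys = 0;                // keys currently stored in this node
    Key         keys[Order - 1];             // kept sorted
    BTreeNode*  children[Order] = {};        // child pointers; all null in a leaf
    bool        is_leaf = true;
};

// Example instantiation: up to 64 children, i.e. at most 63 keys per node.
using Node64 = BTreeNode<int, 64>;
```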


"The management of the B-tree structure can be quite complicated when the data changes."


I remember them being much simpler to understand (and implement) than red-black trees.


"B-trees try to minimize the number of disk accesses so that data retrieval is reasonably deterministic."


That much is true.


"It's not uncommon to see something like 4 B-tree accesses necessary to look up a bit of data in a very large database."


Got data?


"In most cases I'd say that in-memory RB trees are faster."


Got data?


"Because the lookup is binary, it's very easy to find something. A B-tree can have multiple children per node, so on each node you have to scan the node to look for the appropriate child. This is an O(N) operation."


The size of each node is a fixed parameter, so even if you do a linear scan, it's O(1). And if we take the big-O over the size of each node, note that you typically keep the array sorted, so it's O(log n) in the node size.
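

To make this concrete, here is a small illustrative sketch of the in-node search: because the keys within a node are kept sorted and the node size is bounded by the order, this step is O(log b) in the node size b, which is constant with respect to n:

```cpp
#include <algorithm>
#include <cstddef>

// Sketch: choosing the child to descend into within a single B-tree node.
// `keys` is the node's sorted key array with `num_keys` valid entries.
// Binary search makes this O(log b) for node size b; since b is a fixed
// parameter of the tree, it is O(1) with respect to the total element count n.
template <typename Key>
std::size_t child_index(const Key* keys, std::size_t num_keys, const Key& target) {
    const Key* pos = std::lower_bound(keys, keys + num_keys, target);
    return static_cast<std::size_t>(pos - keys);   // index of the child to follow
}
```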


"On an RB-tree it'd be O(log N), since you're doing one comparison and then branching."


You're comparing apples and oranges. The O(log n) is because the height of the tree is at most O(log n), just as it is for a B-tree.


Also, unless you play nasty allocation tricks with the red-black trees, it seems reasonable to conjecture that B-trees have better caching behavior (they access arrays rather than pointers strewn all over the place, and have less allocation overhead, which increases memory locality even more), which might help them in the speed race.


I can point to experimental evidence that B-trees (with size parameters 32 and 64, specifically) are very competitive with red-black trees for small sizes, and outperform them hands down for even moderately large values of n. See http://idlebox.net/2007/stx-btree/stx-btree-0.8.3/doxygen-html/speedtest.html


B-trees are faster. Why? I conjecture that it's due to memory locality, better caching behavior and less pointer chasing (which are, if not the same things, overlapping to some degree).


answered Mar 18 '09 at 9:57 by Jonas Kölker (edited Jul 3 '09 at 11:58)
20 – Although useful and making good points, I will not vote for a post with this hostile tone. – San Jacinto Jun 18 '09 at 12:10
4 – This directly contradicts what a lot of algorithm books say. On the other hand, it actually makes sense. +1 for insight. – Konrad Rudolph Jun 18 '09 at 12:17
3 – Algorithm books usually say something about the machine assumptions in the front matter, and those assumptions are simply no longer valid. – Stephan Eggermont Jul 2 '09 at 14:47
27 – How is this post in any way hostile? The original accepted answer was misleading and incorrect, and the most 'hostile' word he used was 'nonsense'. Hardly going to hurt a grown man's feelings. +1 for insight as well, and for the reference to the experiment :) – nevelis Jun 17 '11 at 4:30
1 – Excellent explanation. – Seraph Apr 9 at 16:02


65 votes
Actually, Wikipedia has a great article that shows every RB-Tree can easily be expressed as a B-Tree. Take the following tree as an example:


[Image: RB-Tree]


Now just convert it to a B-Tree (to make this more obvious, nodes are still colored R/B, which you usually don't have in a B-Tree):


[Image: the same tree as a B-Tree]


(cannot add the image here for some weird reason)


The same is true for any other RB-Tree. It's taken from this article:


http://en.wikipedia.org/wiki/Red-black_tree


To quote from this article:


The red-black tree is then structurally equivalent to a B-tree of order 4, with a minimum fill factor of 33% of values per cluster with a maximum capacity of 3 values.


I found no data showing that one of the two is significantly better than the other. I guess one of them would already have died out if that were the case. They differ in how much data they must keep in memory and in how complicated it is to add/remove nodes from the tree.


Update:


My personal tests suggest that B-Trees are better when searching for data, as they have better data locality and thus the CPU cache can do comparisons somewhat faster. The higher the order of a B-Tree (the order is the number of children a node can have), the faster the lookup gets. On the other hand, they perform worse at adding and removing entries the higher their order is. This is caused by the fact that adding a value within a node has linear complexity.

As each node is a sorted array, you must move lots of elements around within that array when adding an element into the middle: either all elements to the left of the new element must be moved one position to the left, or all elements to the right of it must be moved one position to the right. If a value moves one node upwards during an insert (which happens frequently in a B-Tree), it leaves a hole which must also be filled, either by moving all elements from the left one position to the right or by moving all elements to the right one position to the left. These operations (in C usually performed by memmove) are in fact O(n) in the node size.

So the higher the order of the B-Tree, the faster the lookup but the slower the modification. On the other hand, if you choose the order too low (e.g. 3), a B-Tree shows little advantage or disadvantage over other tree structures in practice (in such a case you might as well use something else). Thus I'd always create B-Trees with high orders (at least 4; 8 and up is fine).
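

As a rough sketch of the linear-time shift described above (names are illustrative, and it assumes the node's key array still has a free slot):

```cpp
#include <cstddef>
#include <cstring>

// Inserting a key into the sorted array inside one B-tree node: every key to
// the right of the insertion point is shifted one slot via memmove, so the
// cost grows linearly with the node order.
void insert_into_node(int* keys, std::size_t& num_keys, std::size_t pos, int key) {
    // Assumes the node is not full, i.e. the array has room for one more key.
    std::memmove(keys + pos + 1, keys + pos, (num_keys - pos) * sizeof(int));
    keys[pos] = key;
    ++num_keys;
}
```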


File systems, which are often based on B-Trees, use much higher orders (order 200 and even a lot more). This is because they usually choose the order high enough that a node (when containing the maximum number of allowed elements) equals either the size of a sector on the hard drive or of a cluster of the file system. This gives optimal performance (since an HD can only write a full sector at a time; even when just one byte is changed, the full sector is rewritten anyway) and optimal space utilization (as each data entry on the drive equals at least the size of one cluster, or a multiple of the cluster size, no matter how big the data really is). Because the hardware sees data as sectors and the file system groups sectors into clusters, B-Trees can yield much better performance and space utilization for file systems than any other tree structure; that's why they are so popular for file systems.


When your app is constantly updating the tree, adding or removing values from it, an RB-Tree or an AVL-Tree may show better performance on average compared to a B-Tree with a high order. They are somewhat worse for lookups and they might also need more memory, but in exchange modifications are usually fast. Actually, RB-Trees are even faster for modifications than AVL-Trees, whereas AVL-Trees are a little bit faster for lookups as they are usually less deep.


So as usual it depends a lot on what your app is doing. My recommendations are:


Lots of lookups, few modifications: B-Tree (with a high order)
Lots of lookups, lots of modifications: AVL-Tree
Few lookups, lots of modifications: RB-Tree

An alternative to all these trees are AA-Trees. As this PDF paper suggests, AA-Trees (which are in fact a sub-group of RB-Trees) are almost equal in performance to normal RB-Trees, but they are much easier to implement than RB-Trees, AVL-Trees, or B-Trees. Here is a full implementation; look how tiny it is (the main function is not part of the implementation, and half of the implementation lines are actually comments).
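

To give an idea of why AA-Trees are so small, here is a sketch of their two rebalancing helpers, skew and split, in the usual formulation from Andersson's paper (illustrative, not the linked implementation):

```cpp
// AA-tree rebalancing sketch. Each node carries a "level"; leaves have level 1.
struct AANode {
    int     key;
    int     level;
    AANode* left  = nullptr;
    AANode* right = nullptr;
};

// skew: remove a left horizontal link (left child on the same level) by rotating right.
AANode* skew(AANode* t) {
    if (t && t->left && t->left->level == t->level) {
        AANode* l = t->left;
        t->left  = l->right;
        l->right = t;
        return l;
    }
    return t;
}

// split: remove two consecutive right horizontal links by rotating left
// and promoting the middle node one level up.
AANode* split(AANode* t) {
    if (t && t->right && t->right->right && t->right->right->level == t->level) {
        AANode* r = t->right;
        t->right = r->left;
        r->left  = t;
        ++r->level;
        return r;
    }
    return t;
}
```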


As the PDF paper shows, a Treap is also an interesting alternative to a classic tree implementation. A Treap is also a binary tree, but one that doesn't try to enforce balancing. To avoid the worst-case scenarios that you may get in unbalanced binary trees (causing lookups to become O(n) instead of O(log n)), a Treap adds some randomness to the tree. Randomness cannot guarantee that the tree is well balanced, but it makes it highly unlikely that the tree is extremely unbalanced.
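

A rough sketch of Treap insertion, showing where the randomness comes in (illustrative code, not taken from the paper): it is an ordinary BST insert on the key, followed by rotations that restore the heap property on the random priorities:

```cpp
#include <cstdlib>

struct TreapNode {
    int        key;
    int        priority;             // random; keeps the tree balanced in expectation
    TreapNode* left  = nullptr;
    TreapNode* right = nullptr;
    explicit TreapNode(int k) : key(k), priority(std::rand()) {}
};

TreapNode* rotate_right(TreapNode* t) { TreapNode* l = t->left;  t->left  = l->right; l->right = t; return l; }
TreapNode* rotate_left (TreapNode* t) { TreapNode* r = t->right; t->right = r->left;  r->left  = t; return r; }

// Insert as in a plain BST, then rotate the new node up while its (random)
// priority is larger than its parent's.
TreapNode* insert(TreapNode* t, int key) {
    if (!t) return new TreapNode(key);
    if (key < t->key) {
        t->left = insert(t->left, key);
        if (t->left->priority > t->priority) t = rotate_right(t);
    } else {
        t->right = insert(t->right, key);
        if (t->right->priority > t->priority) t = rotate_left(t);
    }
    return t;
}
```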


answered Jul 28 '09 at 16:39 by Mecki (edited Oct 13 '11 at 12:19)
3 – I wish I could give +10 to this answer; it's the best here (after the update, of course). – Luís Guilherme Jan 6 '10 at 15:12
1 – This really is the best answer. – deft_code Feb 4 '10 at 17:23
1 – This answer is insanely good. Definitely the best answer. – fthinker Mar 2 '12 at 7:40
1 – Really good update. – Siddhartha Oct 27 '12 at 10:20
– What I am missing in most places is a discussion of key size, i.e. as keys increase in size, up to perhaps the size of a cache line, you'd expect the caching advantage to balance out. I see a lot of benchmarks that rely on inserting integers or short strings, which is often, but not always, how they are used in practice. – wds Apr 2 at 8:01
24 votes
Nothing prevents a B-Tree implementation that works only in memory. In fact, if key comparisons are cheap, an in-memory B-Tree can be faster because its packing of multiple keys in one node causes fewer cache misses during searches. See this link for performance comparisons. A quote: "The speed test results are interesting and show the B+ tree to be significantly faster for trees containing more than 16,000 items." (A B+Tree is just a variation on a B-Tree.)
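

As a hedged sketch of how such an in-memory comparison might be set up, assuming the STX B+ Tree library from the linked speed test is installed (the header and class names here are from memory and may differ between versions):

```cpp
#include <map>
#include <utility>
#include <stx/btree_map.h>   // assumption: STX B+ Tree header from the linked project

int main() {
    std::map<int, int>       rb_map;   // typically implemented as a red-black tree
    stx::btree_map<int, int> bp_map;   // in-memory B+ tree packing many keys per node

    for (int i = 0; i < 100000; ++i) {
        rb_map.insert(std::make_pair(i, i));
        bp_map.insert(std::make_pair(i, i));
    }
    // Timing lookups over both containers is essentially what the linked
    // speed test does to reach its ">16,000 items" conclusion.
    return 0;
}
```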


answered Mar 15 '09 at 10:42 by zvrba (edited Mar 15 '09 at 10:53 by starblue)
– This goes directly into my bookmarks folder. – Konrad Rudolph Jun 18 '09 at 12:19
1 – The answer is kinda weak, but the link is golden. – deft_code Feb 4 '10 at 17:26
4 votes
The question is old but I think it is still relevant. Jonas Kölker and Mecki gave very good answers, but I don't think the answers cover the whole story. I would even argue that the whole discussion is missing the point :-).


What was said about B-Trees is true when entries are relatively small (integers, small strings/words, floats, etc.). When entries are large (over 100 bytes), the differences become smaller/insignificant.


Let me sum up the main points about B-Trees:


They are faster than any Binary Search Tree (BST) due to memory locality (resulting in fewer cache and TLB misses).


B-Trees are usually more space-efficient if entries are relatively small or of variable size. Free-space management is easier (you allocate larger chunks of memory) and the extra metadata overhead per entry is lower. B-Trees will waste some space, as nodes are not always full; however, they still end up being more compact than Binary Search Trees.


The big-O performance (O(log n)) is the same for both. Moreover, if you do binary search inside each B-Tree node, you will even end up with the same number of comparisons as in a BST (it is a nice math exercise to verify this). If the B-Tree node size is sensible (1-4x cache line size), linear searching inside each node is still faster because of hardware prefetching. You can also use SIMD instructions for comparing basic data types (e.g. integers).
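

For instance, the linear in-node search mentioned above can be written as a branch-free counting loop, which is friendly to hardware prefetching and which compilers can often auto-vectorize with SIMD (a sketch with illustrative names):

```cpp
#include <cstddef>

// Branch-free linear scan inside one node: count the keys smaller than the
// target. The sequential access pattern suits the prefetcher, and the loop
// is simple enough for auto-vectorization.
std::size_t lower_bound_linear(const int* keys, std::size_t num_keys, int target) {
    std::size_t idx = 0;
    for (std::size_t i = 0; i < num_keys; ++i)
        idx += (keys[i] < target);   // no branch, just an add per key
    return idx;                      // position of the first key >= target
}
```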


B-Trees are better suited for compression: there is more data per node to compress. In certain cases this can be a huge benefit. Just think of an auto-incrementing key in a relational database table that is used to build an index. The leaf nodes of a B-Tree contain consecutive integers that compress very, very well.


B-Trees are clearly much, much faster when stored on secondary storage (where you need to do block IO).


On paper, B-Trees have a lot of advantages and close to no disadvantages. So should one just use B-Trees for best performance?


The answer is usually NO -- if the tree fits in memory. In cases where performance is crucial you want a thread-safe tree-like data structure (simply put, several threads can do more work than a single one). It is more problematic to make a B-Tree support concurrent accesses than to make a BST. The most straightforward way to make a tree support concurrent access is to lock nodes as you are traversing/modifying them. In a B-Tree you lock more entries per node, resulting in more serialization points and more contended locks.
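

A minimal sketch of that "lock nodes as you traverse" idea (hand-over-hand locking, also called lock coupling), shown on a plain BST; a B-Tree would do the same per node, but each held lock then covers many entries at once:

```cpp
#include <mutex>

struct LockedNode {
    int         key;
    LockedNode* left  = nullptr;
    LockedNode* right = nullptr;
    std::mutex  m;                  // one lock per node
};

// Hand-over-hand search: acquire the child's lock before releasing the
// parent's, so no other thread can restructure the path underneath us.
bool contains(LockedNode* root, int key) {
    if (!root) return false;
    root->m.lock();
    LockedNode* cur = root;
    while (cur) {
        if (key == cur->key) { cur->m.unlock(); return true; }
        LockedNode* next = (key < cur->key) ? cur->left : cur->right;
        if (next) next->m.lock();   // lock the child first...
        cur->m.unlock();            // ...then release the parent
        cur = next;
    }
    return false;
}
```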


All tree versions (AVL, Red/Black, B-Tree, and others) have countless variants that differ in how they support concurrency. The vanilla algorithms that are taught in a university course or read in some introductory book are almost never used in practice. So it is hard to say which tree performs best, as there is no official agreement on the exact algorithms behind each tree. I would suggest thinking of the trees mentioned more as classes of data structures that obey certain tree-like invariants than as precise data structures.


Take, for example, the B-Tree. The vanilla B-Tree is almost never used in practice -- you cannot make it scale well! The most common B-Tree variant used is the B+-Tree (widely used in file systems and databases). The main differences between the B+-Tree and the B-Tree are: 1) you don't store entries in the inner nodes of the tree (thus you don't need write locks high in the tree when modifying an entry stored in an inner node); 2) you have links between nodes at the same level (thus you do not have to lock the parent of a node when doing range searches).
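

A structural sketch of those two differences, with illustrative names: inner nodes hold only separator keys and child pointers (difference 1), while leaves hold the actual entries plus a link to the next leaf (difference 2):

```cpp
#include <cstddef>

template <typename Key, typename Value, std::size_t Order>
struct BPlusLeaf {
    std::size_t num_keys = 0;
    Key         keys[Order - 1];
    Value       values[Order - 1];      // entries live only in the leaves
    BPlusLeaf*  next = nullptr;         // sibling link used for range scans
};

template <typename Key, typename Value, std::size_t Order>
struct BPlusInner {
    std::size_t num_keys = 0;
    Key         keys[Order - 1];        // separator keys only, no values here
    void*       children[Order] = {};   // each points to a BPlusInner or a BPlusLeaf
};
```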


I hope this helps.


answered Feb 13 '13 at 1:02 by user2066248
3 votes
The folks at Google recently released their implementation of STL-style containers, which is based on B-trees. They claim their version is faster and consumes less memory than the standard STL containers, which are implemented via red-black trees. More details here.
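

A hedged usage sketch, assuming the headers of Google's cpp-btree library (the one linked above) are on the include path; the header and namespace names here are from memory and may differ:

```cpp
#include <string>
#include "btree_map.h"   // assumption: header from Google's cpp-btree library

int main() {
    // The interface mirrors std::map, so switching is mostly a type change.
    btree::btree_map<int, std::string> m;
    m[42] = "forty-two";
    return m.count(42) == 1 ? 0 : 1;
}
```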


answered Feb 14 '13 at 7:53 by Koka Chernov (edited Jan 14 at 10:49)
2 votes
For some applications, B-trees are significantly faster than BSTs. The trees you may find here:


http://freshmeat.net/projects/bps


are quite fast. They also use less memory than regular BST implementations, since they do not require the BST infrastructure of 2 or 3 pointers per node, plus some extra fields to keep the balancing information.


answered Jul 19 '11 at 14:21 by Ciprian Niculescu
0 votes
They are used in different circumstances: B-trees are used when the tree nodes need to be kept together in storage, typically because storage is a disk page and so rebalancing could be very expensive. RB trees are used when you don't have this constraint. So B-trees will probably be faster if you want to implement, say, a relational database index, while RB trees will probably be faster for, say, an in-memory search.


answered Mar 15 '09 at 9:29 by anon (edited Mar 15 '09 at 9:40)
2 – RB trees will not be faster for in-memory search. That time is gone. – Stephan Eggermont Jul 2 '09 at 14:49
0 votes
They all have the same asymptotic behavior, so the performance depends more on the implementation than on which type of tree you are using. Some combination of tree structures might actually be the fastest approach, where each node of a B-tree fits exactly into a cache line and some sort of binary tree is used to search within each node. Managing the memory for the nodes yourself might also enable you to achieve even greater cache locality, but at a very high price.


Personally, I just use whatever is in the standard library for the language I am using, since it's a lot of work for a very small performance gain (if any).


On a theoretical note... RB-trees are actually very similar to B-trees, since they simulate the behavior of 2-3-4 trees. AA-trees are a similar structure, which simulates 2-3 trees instead.


answered Jun 18 '09 at 11:58



Source: https://stackoverflow.com/questions/647537/b-tree-faster-than-avl-or-redblack-tree
