
Destructing trees safely and cheaply

Rust, Performance · 2 min read

Trees in the wild can be arbitrarily deep. While this is a non-problem in garbage-collected languages such as Java and Go, not every application can afford GC pauses.

Garbage collection is known to increase long tail latencies, which in turn can deteriorate user experience.

This leads us to languages with manual memory management, such as C++ and Rust.
Rust's ownership-based memory management is well suited to expressing this problem, so we'll use it for the remainder of this article.

Let's start with a degenerate version of binary trees: linked lists!
Each one can be seen as a binary tree with only left (or only right) children.

```rust
pub struct LinkedList<T> {
    // `Box` is Rust's type to represent heap allocated objects
    root: Option<Box<ListNode<T>>>,
}

struct ListNode<T> {
    datum: T,
    next: Option<Box<ListNode<T>>>,
}

impl<T> LinkedList<T> {
    pub fn push_front(&mut self, datum: T) {
        self.root = Some(Box::new(ListNode {
            datum,
            next: self.root.take(),
        }));
    }
}

fn main() {
    let length: usize = std::env::args()
        .nth(1)
        .expect("linked list length missing")
        .parse()
        .expect("invalid unsigned integer");

    println!("creating a linked list of length {}", length);
    let mut linked_list = LinkedList { root: None };
    for i in 0..length {
        linked_list.push_front(i);
    }
    println!("linked list created");
}
```

The main function is pretty simple: it reads the first argument and creates a linked list of that length.

Let's try it out with a parameter of one million.

```
>> cargo run --release 1000000
creating a linked list of length 1000000
linked list created

thread 'main' has overflowed its stack
fatal runtime error: stack overflow
```

Uh oh.
What happened?

To ensure that the linked list's memory is reclaimed, a destructor runs once the struct goes out of scope: all of its fields are dropped recursively (the same behavior as C++ smart pointers).

The root destructor calls the next node's destructor, which calls the next one's, and so on. The call stack grows as deep as the linked list is long, but the main thread's stack size is fixed. Stack overflow 💥

```rust
// Boxes omitted for clarity
ListNode { // vvv (n-2)th drop call, ^^^ dropped third
    datum: 2,
    next: Some(ListNode { // vvv (n-1)th drop call, ^^^ dropped second
        datum: 1,
        next: Some(ListNode { // vvv nth drop call, ^^^ dropped first
            datum: 0,
            next: None,
        }),
    }),
}
```

Is there a way to fix this?

Yes, by using tail recursion!
A function that calls itself recursively as its very last action can have its stack frame safely removed and replaced.

Let's write a custom destructor to see if it works!

```rust
impl<T> Drop for ListNode<T> {
    fn drop(&mut self) {
        if let Some(node) = self.next.take() {
            let owned_node = *node;
            drop(owned_node); // for good measure
        }
    }
}
```
```
>> cargo run --release 1000000
creating a linked list of length 1000000
linked list created

thread 'main' has overflowed its stack
fatal runtime error: stack overflow
```

Uhh, this is awkward.

Tail-call optimization isn't guaranteed in Rust. However, it's possible to emulate it with a well-crafted, constant-memory loop.

It is in fact what the standard library does for std::collections::LinkedList.

```rust
while let Some(node) = self.pop_front_node() {
    // ... stuff to handle panics
    drop(node);
}
```


Now, let's get back to binary trees. A binary tree looks like this:

```rust
type TreeNodeRef<T> = Option<Box<TreeNode<T>>>;

pub struct Tree<T> {
    root: TreeNodeRef<T>,
}

struct TreeNode<T> {
    datum: T,
    left: TreeNodeRef<T>,
    right: TreeNodeRef<T>,
}
```

Is there a way to prevent stack overflow during the destruction of a deep binary tree?

Since every node can have two children, there isn't an obvious way to apply our previous trick...

Not only is this possible, it can be done in constant memory!

```rust
impl<T> Drop for Tree<T> {
    fn drop(&mut self) {
        let Some(mut node) = self.root.take() else { return; };
        loop {
            let TreeNode { left, right, .. } = *node;

            match (left, right) {
                (Some(mut left), Some(right)) => {
                    // Find the inorder predecessor: the rightmost
                    // node of the left subtree.
                    let mut predecessor = &mut left;
                    while predecessor.right.is_some() {
                        predecessor = predecessor.right.as_mut().unwrap();
                    }
                    // Its right child is necessarily None, so the
                    // right subtree can be hung there for free.
                    predecessor.right = Some(right);
                    node = left;
                }
                (Some(next), None) | (None, Some(next)) => {
                    node = next;
                }
                _ => break,
            }
        }
    }
}
```

The idea is to restructure the binary tree into a linked list!

Whenever a node has two children, we transfer ownership of the right child to the inorder predecessor found in the left subtree. This is free in terms of memory: the inorder predecessor's right child is necessarily None (if it weren't, it wouldn't be the predecessor), and that empty slot takes exactly as much space as a Box<_>.

Very cool, but what is the runtime cost of this?

Every edge (indirection) is visited at most twice: once to reach a node, and a second time while searching for the predecessor of another node. A binary tree of n nodes has n-1 edges, so the runtime cost is at most 2(n-1) edge traversals, i.e. linear in the size of the tree.

This algorithm can be seen as a faster, destructive variant of the preorder Morris traversal.
It is also generalizable to n-ary trees.