After giving another talk on ftrace, the official tracer of the Linux kernel, I thought it might be useful to offer a quick primer for those who are still unfamiliar with it. Ftrace was added to the Linux kernel back in 2008, but a lot of people still don’t quite get what it is, or what it can empower them to do.
To put it very simply: ftrace is a Linux kernel feature that lets you trace Linux kernel function calls. Essentially, it lets you look into the Linux kernel and see what it’s doing. Why would you want to do that? Well, most of the time you wouldn’t. But when you do, it is an essential ability to have!
While a task is in user space, it has no visibility into the happenings of the kernel. The kernel runs in a privileged mode that won’t let you see what’s going on inside. And we want it to be that way – Linux would open itself to major security risks if we made the kernel’s workings visible to everyone all the time.
But when you run into problems, you really do need a window into the kernel and that’s what tracing gives you. Say you write a device driver, debug it, and confirm that everything’s fine. Then you run it inside the kernel and find that it’s not working as you expected, or something else stalls or breaks when you run it. That’s when you need to see what is happening at the kernel level.
Ftrace actually stands for “function trace,” and its ability to trace functions is what first made the tool popular. Essentially, ftrace is built around a clever lockless ring buffer implementation, and that buffer stores all of the trace data. It allows you to watch the flow of execution of all, or a selection of, the functions within the kernel. You can view this live, although that’s usually impossible to follow, or you can record it into a file and examine the flow of execution at a later time.
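As a minimal sketch of what that looks like in practice: these commands must be run as root, and the path is an assumption based on where tracefs is usually mounted on modern kernels (older kernels expose it under /sys/kernel/debug/tracing instead).

```shell
# Run as root. Assumes tracefs is mounted at /sys/kernel/tracing.
cd /sys/kernel/tracing

echo function > current_tracer   # start tracing kernel function calls
sleep 1                          # let some kernel activity accumulate
echo nop > current_tracer        # stop tracing
head -20 trace                   # view the first recorded function calls
```

Even one second of tracing every function can produce an enormous amount of output, which is why recording and examining later is usually the practical approach.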
This is incredibly useful because it can tell you why your code isn’t working. Maybe the code stalled because it was blocking on locks for a reason you were not previously aware of. Perhaps it triggered a soft interrupt that you were not expecting. A properly composed ftrace query will let you figure out exactly what’s going on, and in that sense, it’s a valuable debugging tool.
Ftrace has two meanings: one refers specifically to the infrastructure of the function hooks, and the other to the tracing framework built on the tracefs file system interface. Within the framework, there are some key files, including current_tracer (which sets and displays the tracer that is currently configured), available_tracers (which lists the tracers compiled into the kernel), and trace (which holds the output of the trace in a human-readable format).
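For example, you can peek at those files directly (again assuming tracefs is mounted at /sys/kernel/tracing and you are running as root; the sample values shown in the comments will vary with your kernel configuration):

```shell
cd /sys/kernel/tracing
cat current_tracer      # e.g. "nop" when no tracer is active
cat available_tracers   # e.g. "function_graph function nop", depending on kernel config
head trace              # the human-readable trace output
```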
Other tools built on the function hooks include the function graph tracer (which traces not only the function entry, but also the return of the function allowing you to create a call graph of the function flow), the stack tracer (where it is possible to see which function is taking up the most stack), kprobes (dynamic events) and even live kernel patching!
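As a hedged example of the function graph tracer: the set_graph_function file restricts the graph to a chosen function and everything it calls. The choice of kmem_cache_alloc below is purely illustrative; any traceable kernel function would do.

```shell
cd /sys/kernel/tracing
echo kmem_cache_alloc > set_graph_function   # only graph this function and its callees
echo function_graph > current_tracer
sleep 1
echo nop > current_tracer
head -30 trace                               # entries and exits shown with call-graph indentation
echo > set_graph_function                    # clear the filter when done
```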
The rest of the ftrace framework is much more involved. There are ways to trace latency. The ftrace infrastructure created the static trace events, which even perf utilizes; there are more than a thousand of them in the current Linux kernel, and more are added every day. The static trace events cover the various events within the kernel that the maintainers feel are important. Examples include when tasks are scheduled, when interrupts are triggered, when network packets arrive, and much more. The trace events contain all the details the maintainers need to debug a system that is in production, by recording the events as issues occur and analyzing the data offline.
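One way to see a static trace event in action (the event name here assumes a standard kernel, where scheduler events live under events/sched; root is required):

```shell
cd /sys/kernel/tracing
echo 1 > events/sched/sched_switch/enable   # record every context switch
sleep 1
echo 0 > events/sched/sched_switch/enable
grep sched_switch trace | head              # each line shows the previous and next task
```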
The trace events have an entire infrastructure themselves. This includes simple histograms, triggering stack traces, starting and stopping tracing, and enabling or disabling other trace events. There are even synthetic events that can be created by associating two events with a common field (like a wake up event and when the task woken up gets scheduled, covered by the scheduler event). Using the synthetic events you can trace and record custom latencies.
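A small sketch of that infrastructure, assuming the kernel was built with CONFIG_HIST_TRIGGERS: attaching a hist trigger to the sched_wakeup event builds an in-kernel histogram keyed on the name of the task being woken.

```shell
cd /sys/kernel/tracing
echo 'hist:keys=comm' > events/sched/sched_wakeup/trigger    # attach a histogram trigger
sleep 1
head -20 events/sched/sched_wakeup/hist                      # per-task wakeup counts
echo '!hist:keys=comm' > events/sched/sched_wakeup/trigger   # detach the trigger
```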
Finally, ftrace is a great tool simply for learning. The Linux kernel is now so vast that none of us can possibly know every corner of it (not even Linus Torvalds himself). If I have to learn a new subsystem, I enable ftrace, perform some work, and then analyze the tracing data to understand how that subsystem works. I wish I had ftrace when I first started kernel development.
If you want to explore ftrace for the first time, the official ftrace documentation is a good place to start. Ftrace is both simple and flexible to use: you can drive it with nothing but echo and cat commands. In fact, ftrace has always been designed to work with just busybox or toybox, without any add-on utilities.
Alex Dzyoba wrote a useful introduction to ftrace a few years back, and more recently Andreas Christoforou wrote the valuable “Kernel Tracing with Ftrace.” I’d also recommend Julia Evans’ blog post “ftrace: trace your kernel functions!” She discusses the difficulty of running ftrace like this:
echo function > current_tracer
echo do_page_fault > set_ftrace_filter
She then explores a more user-friendly interface called trace-cmd. It’s a phenomenal place to start your ftrace journey and is complemented well by Brendan Gregg’s “ftrace: The Hidden Light Switch.”
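As a taste of that interface (a sketch only; trace-cmd must be installed, and recording requires root):

```shell
trace-cmd record -e sched_switch sleep 1   # trace context switches while "sleep 1" runs
trace-cmd report | head                    # pretty-print the recorded trace.dat file
trace-cmd reset                            # return the tracing system to its defaults
```

Unlike the raw tracefs interface, trace-cmd saves its data to a file (trace.dat), so recordings can be copied off a machine and analyzed elsewhere.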
If you are an audio-visual learner (careful, I can speak quite fast), you can check out my Kernel Recipes talk on Understanding the Linux Kernel via Ftrace, or my ftrace tutorial at linux.conf.au 2019 titled “See what your computer is doing with Ftrace utilities.”
Lastly, a talk I gave in Sofia last year offers a perspective on using ftrace to learn about the Linux kernel.