
7 useful bcc/BPF performance analysis tools for Linux

A new technology has arrived in Linux that can provide sysadmins and developers with a large number of new tools and dashboards for performance analysis and troubleshooting. It's called the enhanced Berkeley Packet Filter (eBPF, or just BPF), although these enhancements weren't developed in Berkeley, they operate on much more than just packets, and they do much more than just filtering. I'll discuss one way to use BPF on the Fedora and Red Hat family of Linux distributions, demonstrating on Fedora 26.

BPF can run user-defined sandboxed programs in the kernel to add new custom capabilities instantly. It's like adding superpowers to Linux, on demand. Examples of what you can use it for include:

  • Advanced performance tracing tools: programmatic low-overhead instrumentation of filesystem operations, TCP events, user-level events, etc.
  • Network performance: dropping packets early to improve DDoS resilience, or redirecting packets in-kernel to improve performance
  • Security monitoring: 24x7 custom monitoring and logging of suspicious kernel and userspace events

BPF programs must pass an in-kernel verifier to ensure they are safe to run, making BPF a safer option, where possible, than writing custom kernel modules. I suspect most people won't write BPF programs themselves, but will use other people's. I've published many on GitHub as open source in the BPF Compiler Collection (bcc) project. bcc provides different frontends for BPF development, including Python and Lua, and is currently the most active project for BPF tooling.
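To demystify what these tools contain, here is a minimal "Hello World"-style bcc sketch in Python. It assumes the bcc Python bindings are installed and that `main()` is invoked as root on a BPF-capable kernel; the BPF program is written in C, passed as a string, and checked by the in-kernel verifier before it runs.

```python
# Minimal bcc/BPF sketch (assumes bcc's Python bindings are installed and
# that main() is run as root on a BPF-capable kernel).
BPF_PROGRAM = r"""
int hello(void *ctx) {
    bpf_trace_printk("Hello, World!\n");
    return 0;
}
"""

def main():
    from bcc import BPF  # provided by the bcc packages
    b = BPF(text=BPF_PROGRAM)  # compiled, then checked by the verifier
    # Fire on the clone() syscall, so each new process prints one line:
    b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
    b.trace_print()  # stream bpf_trace_printk output until Ctrl-C
```

Calling `main()` as root should print a line for each new process created on the system; the bcc tools below are elaborations of this same pattern.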

7 useful new bcc/BPF tools

To understand the bcc/BPF tools and what they instrument, I created the following diagram and added it to the bcc project:

These are command-line interface (CLI) tools you can use over SSH (secure shell). Much analysis nowadays, including at my employer, is conducted using GUIs and dashboards, and SSH is a last resort. But these CLI tools are still a good way to preview BPF capabilities, even if you ultimately intend to use them only through a GUI when available. I've begun adding BPF capabilities to an open source GUI, but that's a topic for another article. Right now I'd like to share the CLI tools, which you can use today.

1. execsnoop

Where to start? How about watching new processes. These can consume system resources, yet be so short-lived that they don't show up in top(1) or other tools. They can be instrumented (or, using the industry jargon, traced) with execsnoop. While tracing, I'll log in over SSH in another window:

# /usr/share/bcc/tools/execsnoop
PCOMM            PID    PPID   RET ARGS
sshd             12234  727      0 /usr/sbin/sshd -D -R
unix_chkpwd      12236  12234    0 /usr/sbin/unix_chkpwd root nonull
unix_chkpwd      12237  12234    0 /usr/sbin/unix_chkpwd root chkexpiry
bash             12239  12238    0 /bin/bash
id               12241  12240    0 /usr/bin/id -un
hostname         12243  12242    0 /usr/bin/hostname
pkg-config       12245  12244    0 /usr/bin/pkg-config --variable=completionsdir bash-completion
grepconf.sh      12246  12239    0 /usr/libexec/grepconf.sh -c
grep             12247  12246    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
tty              12249  12248    0 /usr/bin/tty -s
tput             12250  12248    0 /usr/bin/tput colors
dircolors        12252  12251    0 /usr/bin/dircolors --sh /etc/DIR_COLORS
grep             12253  12239    0 /usr/bin/grep -qi ^COLOR.*none /etc/DIR_COLORS
grepconf.sh      12254  12239    0 /usr/libexec/grepconf.sh -c
grep             12255  12254    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS
grepconf.sh      12256  12239    0 /usr/libexec/grepconf.sh -c
grep             12257  12256    0 /usr/bin/grep -qsi ^COLOR.*none /etc/GREP_COLORS

Wow. What is all that? What is grepconf.sh? What is /etc/GREP_COLORS? And is grep really reading its own configuration file … by running grep? How does that even work?

Welcome to the fun of system tracing. You can learn a lot about how the system is really working (or not working, as the case may be) and discover some easy optimizations along the way. execsnoop works by tracing the exec() system call, which is usually used to load new program code into new processes.
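To see why exec()-based tracing matters, here is a small demonstration of just how short-lived such processes can be; /bin/true stands in for any briefly running command:

```python
import subprocess
import time

# Why execsnoop exists: short-lived processes can start and exit between
# the refresh intervals of sampling tools like top(1). This one lives for
# only a few milliseconds.
t0 = time.perf_counter()
subprocess.run(["/bin/true"])   # fork()+exec()s, then exits almost at once
dur_ms = (time.perf_counter() - t0) * 1e3
print(f"/bin/true lived about {dur_ms:.1f} ms")
```

A sampling tool refreshing every second or two has essentially no chance of catching that; event-based tracing catches every one.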

2. opensnoop

Continuing from above: grepconf.sh is probably a shell script, right? I'll run file(1) to check, and also use the opensnoop bcc tool to see what file(1) is opening:

# /usr/share/bcc/tools/opensnoop
PID    COMM               FD ERR PATH
12420  file                3   0 /etc/ld.so.cache
12420  file                3   0 /lib64/libmagic.so.1
12420  file                3   0 /lib64/libz.so.1
12420  file                3   0 /lib64/libc.so.6
12420  file                3   0 /usr/lib/locale/locale-archive
12420  file               -1   2 /etc/magic.mgc
12420  file                3   0 /etc/magic
12420  file                3   0 /usr/share/misc/magic.mgc
12420  file                3   0 /usr/lib64/gconv/gconv-modules.cache
12420  file                3   0 /usr/libexec/grepconf.sh
1      systemd            16   0 /proc/565/cgroup
1      systemd            16   0 /proc/536/cgroup

Tools like execsnoop and opensnoop print one line per event. The output above shows the files that file(1) is opening (or attempting to): the returned file descriptor ("FD" column) is -1 for /etc/magic.mgc, and the "ERR" column indicates the error is "file not found." I didn't know about that file, nor about the /usr/share/misc/magic.mgc that file(1) reads instead. I shouldn't be surprised, but file(1) has no problem identifying the file types:

# file /usr/share/misc/magic.mgc /etc/magic
/usr/share/misc/magic.mgc: magic binary file for file(1) cmd (version 14) (little endian)
/etc/magic:                magic text file for file(1) cmd, ASCII text

opensnoop works by tracing the open() syscall. Why not just use strace -feopen file …? That would work in this case. A few advantages of opensnoop, however: it works system-wide, tracing open() calls across all processes (notice that the output above included opens from systemd), and it should also have much lower overhead: BPF tracing has been optimized, whereas the current version of strace(1) still uses the older and slower ptrace(2) interface.
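The FD and ERR columns reflect plain syscall semantics, which can be illustrated without any tracing at all; the path below is a made-up nonexistent file:

```python
import errno
import os

# What opensnoop's FD and ERR columns report: at the syscall level a
# failed open() returns -1 and sets errno, and error code 2 is ENOENT
# ("No such file or directory"), as in the -1 / 2 row for /etc/magic.mgc.
err = 0
try:
    os.open("/etc/no-such-file-for-this-demo", os.O_RDONLY)
except OSError as e:
    err = e.errno
print("open failed with errno", err, "== ENOENT:", err == errno.ENOENT)
```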

3. xfsslower

bcc/BPF can analyze much more than just syscalls. The xfsslower tool traces common XFS filesystem operations that have a latency greater than 1 millisecond (the argument):

# /usr/share/bcc/tools/xfsslower 1
Tracing XFS operations slower than 1 ms
TIME     COMM           PID    T BYTES   OFF_KB   LAT(ms) FILENAME
14:17:34 systemd-journa 530    S 0       0           1.69 system.journal
14:17:35 auditd         651    S 0       0           2.43 audit.log
14:17:42 cksum          4167   R 52976   0           1.04 at
14:17:45 cksum          4168   R 53264   0           1.62 [
14:17:45 cksum          4168   R 65536   0           1.01 certutil
14:17:45 cksum          4168   R 65536   0           1.01 dir
14:17:45 cksum          4168   R 65536   0           1.17 dirmngr-client
14:17:46 cksum          4168   R 65536   0           1.06 grub2-file
14:17:46 cksum          4168   R 65536   128         1.01 grub2-fstest
[...]

In the output above, I caught many cksum(1) reads ("T" for type == "R") with latency over 1 millisecond. xfsslower works by dynamically instrumenting kernel functions in XFS while the tool is running, and it undoes that instrumentation when it ends. There are versions of this bcc tool for other filesystems as well: ext4slower, btrfsslower, zfsslower, and nfsslower.

This is a useful tool and an important example of BPF tracing. Traditional analysis of filesystem performance focuses on block I/O statistics: what you commonly see printed by the iostat(1) tool and plotted by many performance-monitoring GUIs. Those statistics show how the disks are performing, but not really the filesystem. Often you care more about the filesystem's performance than the disks', since it's the filesystem that applications make requests to and wait on. And the performance of filesystems can be quite different from that of disks! Filesystems may serve reads entirely from memory cache, and also populate that cache via a read-ahead algorithm and for write-back caching. xfsslower shows filesystem performance: what the applications directly experience. This is often useful for exonerating the entire storage subsystem; if there is really no filesystem latency, then performance issues are likely to be elsewhere.
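The core idea of xfsslower, timing each operation and reporting only the slow outliers, can be sketched at user level. This is only a rough analogue (xfsslower does the timing in-kernel with BPF, at far lower overhead, and catches every process):

```python
import os
import tempfile
import time

# Crude user-level sketch of the xfsslower idea: time each file operation
# and keep only those slower than a threshold (here 1 ms).
THRESHOLD_MS = 1.0

def timed_reads(path, n=100):
    slow = []
    for _ in range(n):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            f.read()
        lat_ms = (time.perf_counter() - t0) * 1e3
        if lat_ms > THRESHOLD_MS:
            slow.append(lat_ms)
    return slow

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 65536)
    path = f.name
slow = timed_reads(path)
os.unlink(path)
print(f"{len(slow)} of 100 reads exceeded {THRESHOLD_MS} ms")
```

Most of these reads will come straight from the page cache, which is exactly why filesystem latency can look nothing like disk latency.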

4. biolatency

Although filesystem performance is important to study for understanding application performance, studying disk performance has merit as well. Poor disk performance will affect the application eventually, when various caching techniques can no longer hide its latency. Disk performance is also a target of study for capacity planning.

The iostat(1) tool shows the average disk I/O latency, but averages can be misleading. It can be useful to study the distribution of I/O latency as a histogram, which can be done using biolatency:

# /usr/share/bcc/tools/biolatency
Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 1        |                                        |
        64 -> 127        : 63       |****                                    |
       128 -> 255        : 121      |*********                               |
       256 -> 511        : 483      |************************************    |
       512 -> 1023       : 532      |****************************************|
      1024 -> 2047       : 117     |********                                |
      2048 -> 4095       : 8        |                                        |

This is another useful tool and another useful example; it uses a BPF feature called maps, which can be used to implement efficient in-kernel summary statistics. The transfer of data from kernel level to user level is merely the "count" column; the user-level program generates the rest.
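The aggregation behind those power-of-two buckets is simple to sketch at user level. The sample latencies below are made up; the point is that only the per-bucket counts would cross the kernel/user boundary:

```python
from collections import Counter

# Sketch of the log2 histogram aggregation that biolatency performs inside
# the kernel with a BPF map. Bucket b covers values [2**b, 2**(b+1) - 1].
def log2_bucket(us: int) -> int:
    return max(us.bit_length() - 1, 0)

samples_us = [70, 90, 130, 300, 480, 600, 900, 1500]  # made-up latencies
hist = Counter(log2_bucket(v) for v in samples_us)
for b in sorted(hist):
    print(f"{2**b:>6} -> {2**(b + 1) - 1:<6}: {hist[b]}")
```

Keeping only counts per bucket is what makes this cheap enough to run on every I/O: the kernel never has to ship individual events to user space.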

It's worth noting that many of these tools support CLI options and arguments, as shown by their USAGE message:

# /usr/share/bcc/tools/biolatency -h
usage: biolatency [-h] [-T] [-Q] [-m] [-D] [interval] [count]

Summarize block device I/O latency as a histogram

positional arguments:
  interval            output interval, in seconds
  count               number of outputs

optional arguments:
  -h, --help          show this help message and exit
  -T, --timestamp     include timestamp on output
  -Q, --queued        include OS queued time in I/O time
  -m, --milliseconds  millisecond histogram
  -D, --disks         print a histogram per disk device

examples:
    ./biolatency            # summarize block I/O latency as a histogram
    ./biolatency 1 10       # print 1 second summaries, 10 times
    ./biolatency -mT 1      # 1s summaries, milliseconds, and timestamps
    ./biolatency -Q         # include OS queued time in I/O time
    ./biolatency -D         # show each disk device separately

That they behave like other Unix tools is by design, to aid adoption.

5. tcplife

Another useful tool and example, this time showing the lifespan and throughput statistics of TCP sessions, is tcplife:

# /usr/share/bcc/tools/tcplife
PID   COMM       LADDR           LPORT RADDR           RPORT TX_KB RX_KB MS
12759 sshd       192.168.56.101  22    192.168.56.1    60639     2     3 1863.82
12783 sshd       192.168.56.101  22    192.168.56.1    60640     3     3 9174.53
12844 wget       10.0.2.15       34250 54.204.39.132   443      11  1870 5712.26
12851 curl       10.0.2.15       34252 54.204.39.132   443       0    74 505.90

Before you say, "Can't I just scrape tcpdump(8) output for this?" note that running tcpdump(8), or any packet sniffer, can cost noticeable overhead on high packet-rate systems, even though the user- and kernel-level mechanics of tcpdump(8) have been optimized over the years (it could be much worse). tcplife doesn't instrument every packet; it only watches TCP session state changes, for efficiency, and from those it times the duration of each session. It also uses kernel counters that already track throughput, as well as process and command information ("PID" and "COMM" columns), which aren't available to on-the-wire sniffing tools like tcpdump(8).
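A crude user-level stand-in for session state is sampling /proc/net/tcp, which stores each endpoint as little-endian hex. The helper name below is hypothetical, just to show the decoding:

```python
# tcplife watches TCP state transitions in-kernel; /proc/net/tcp is a
# sampled, user-level view of the same sessions. Each endpoint there is
# an IPv4 address in little-endian hex, plus a hex port.
def decode_endpoint(hex_field: str):
    addr_hex, port_hex = hex_field.split(":")
    # Reverse the byte order: "0100007F" -> 127.0.0.1
    octets = [int(addr_hex[i:i + 2], 16) for i in (6, 4, 2, 0)]
    return ".".join(map(str, octets)), int(port_hex, 16)

print(decode_endpoint("0100007F:0016"))  # ('127.0.0.1', 22)
```

Sampling like this misses sessions shorter than the sampling interval, which is exactly the gap that event-based tracing closes.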

6. gethostlatency

Every previous example involved kernel tracing, so I need at least one user-level tracing example. Here is gethostlatency, which instruments gethostbyname(3) and related library calls for name resolution:

# /usr/share/bcc/tools/gethostlatency
TIME      PID    COMM                  LATms HOST
06:43:33  12903  curl                 188.98 opensource.com
06:43:36  12905  curl                   8.45 opensource.com
06:43:40  12907  curl                   6.55 opensource.com
06:43:44  12911  curl                   9.67 opensource.com
06:45:02  12948  curl                  19.66 opensource.cats
06:45:06  12950  curl                  18.37 opensource.cats
06:45:07  12952  curl                  13.64 opensource.cats
06:45:19  13139  curl                  13.10 opensource.cats

Yes, it's always DNS, so having a tool to watch DNS requests system-wide can be handy (this only works if applications use the standard system library). See how I traced multiple lookups to "opensource.com"? The first took 188.98 milliseconds, and then it was much faster, less than 10 milliseconds: no doubt cached. It also traced multiple lookups to "opensource.cats," a host that sadly doesn't exist, but we can still examine the latency of the first and subsequent lookups. (Is there a little negative caching after the second lookup?)
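You can time a single resolver call yourself from user level; the difference is that gethostlatency attaches uprobes to the C library's resolver functions, so it sees every process's lookups system-wide without modifying any of them:

```python
import socket
import time

# A user-level analogue of gethostlatency: time one name-resolution call.
def resolve_latency_ms(host: str) -> float:
    t0 = time.perf_counter()
    try:
        socket.getaddrinfo(host, None)
    except socket.gaierror:
        pass  # a failed lookup (like "opensource.cats") still takes time
    return (time.perf_counter() - t0) * 1e3

print(f"localhost: {resolve_latency_ms('localhost'):.2f} ms")
```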

7. trace

Okay, one more example. The trace tool was contributed by Sasha Goldshtein and provides some basic printf(1) functionality with custom probes. For example:

# /usr/share/bcc/tools/trace 'pam:pam_start "%s: %s", arg1, arg2'
PID    TID    COMM         FUNC             -
13266  13266  sshd         pam_start        sshd: root

Here I'm tracing libpam and its pam_start(3) function, printing both of its arguments as strings. libpam is the pluggable authentication modules system, and the output shows that sshd called pam_start() for the "root" user (I logged in). There are more examples in the USAGE message ("trace -h"); plus, all of these tools have man pages and examples files in the bcc repository, e.g., trace_example.txt and trace.8.

Install bcc via packages

The best way to install bcc is from an iovisor repository, following the instructions from the bcc INSTALL.md. IO Visor is the Linux Foundation project that includes bcc. The BPF enhancements these tools use were added in the 4.x series Linux kernels, up to 4.9. That means Fedora 25, with its 4.8 kernel, can run most of these tools, and Fedora 26, with its 4.11 kernel, can run them all (at least currently).

If you are on Fedora 25 (or on Fedora 26 and this post was published many months ago; hello from the distant past!), then this package approach should just work. If you are on Fedora 26 now, then skip to the Install via Source section, which avoids a known and fixed bug. That bug fix hasn't made its way into the Fedora 26 package dependencies at the moment. The system I am using is:

# uname -a
Linux localhost.localdomain 4.11.8-300.fc26.x86_64 #1 SMP Thu Jun 29 20:09:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/fedora-release
Fedora release 26 (Twenty Six)

Here are the installation steps I followed, but please refer to INSTALL.md for updated versions:

# echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f25/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
# dnf install bcc-tools
[...]
Total download size: 37 M
Installed size: 143 M
Is this ok [y/N]: y

After installation, you should see new tools in /usr/share:

# ls /usr/share/bcc/tools/
argdist       dcsnoop              killsnoop       softirqs    trace
bashreadline  dcstat               llcstat         solisten    ttysnoop
[...]

Let's try running one of them:

# /usr/share/bcc/tools/opensnoop
chdir(/lib/modules/4.11.8-300.fc26.x86_64/build): No such file or directory
Traceback (most recent call last):
  File "/usr/share/bcc/tools/opensnoop", line 126, in <module>
    b = BPF(text=bpf_text)
  File "/usr/lib/python3.6/site-packages/bcc/__init__.py", line 284, in __init__
    raise Exception("Failed to compile BPF module %s" % src_file)
Exception: Failed to compile BPF module

It failed to run, complaining that /lib/modules/4.11.8-300.fc26.x86_64/build was missing. If you hit this too, it's just because the system is missing kernel headers. If you look at what that file points to (it's a symlink) and then search for it using "dnf whatprovides," it will tell you the package you need to install next. For this system, it's:

# dnf install kernel-devel-4.11.8-300.fc26.x86_64
[...]
Total download size: 20 M
Installed size: 63 M
Is this ok [y/N]: y
[...]

And now:

# /usr/share/bcc/tools/opensnoop
PID    COMM               FD ERR PATH
11792  ls                  3   0 /etc/ld.so.cache
11792  ls                  3   0 /lib64/libselinux.so.1
11792  ls                  3   0 /lib64/libcap.so.2
11792  ls                  3   0 /lib64/libc.so.6
[...]

It works. That's catching activity from an ls command in another window. See the earlier section for other useful commands.

Install via source

If you need to install from source, you'll also find documentation and updated instructions in INSTALL.md. I did the following on Fedora 26:

sudo dnf install -y bison cmake ethtool flex git iperf libstdc++-static \
  python-netaddr python-pip gcc gcc-c++ make zlib-devel \
  elfutils-libelf-devel
sudo dnf install -y luajit luajit-devel  # for Lua support
sudo dnf install -y \
  http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm
sudo pip install pyroute2
sudo dnf install -y clang clang-devel llvm llvm-devel llvm-static ncurses-devel

Everything installed for me except netperf, which hit the following error:

Curl error (28): Timeout was reached for http://pkgs.repoforge.org/netperf/netperf-2.6.0-1.el6.rf.x86_64.rpm [Connection timed out after 120002 milliseconds]

We can ignore this error, because netperf is optional (it's just used for tests) and bcc will compile without it.

Here are the remaining bcc compilation and installation steps:

git clone https://github.com/iovisor/bcc.git
mkdir bcc/build; cd bcc/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install

At this point, the commands should work:

# /usr/share/bcc/tools/opensnoop
PID    COMM               FD ERR PATH
4131   date                3   0 /etc/ld.so.cache
4131   date                3   0 /lib64/libc.so.6
4131   date                3   0 /usr/lib/locale/locale-archive
4131   date                3   0 /etc/localtime
[...]

Final words and other frontends

This was a quick tour of the new BPF performance analysis superpowers that you can use on the Fedora and Red Hat family of operating systems. I demonstrated the popular bcc frontend to BPF and included installation instructions for Fedora. bcc comes with more than 60 new tools for performance analysis, which will help you get the most out of your Linux systems. Perhaps you will use these tools directly over SSH, or perhaps you will use the same functionality via monitoring GUIs once they support BPF.

Also, bcc is not the only frontend in development. There are ply and bpftrace, which aim to provide a higher-level language for quickly writing custom tools. In addition, SystemTap just released version 3.2, including an early, experimental eBPF backend. Should this continue to be developed, it should provide a production-safe and efficient engine for running the many SystemTap scripts and tapsets (libraries) that have been developed over the years. (Using SystemTap with eBPF would be a good topic for another post.)

If you need to develop custom tools, you can do that with bcc as well, although the language is currently much more verbose than SystemTap, ply, or bpftrace. My bcc tools can serve as code examples, plus I contributed a tutorial for developing bcc tools in Python. I'd recommend learning the bcc multi-tools first, as you may get a lot of mileage out of them before needing to write new tools. You can study the multi-tools from their example files in the bcc repository: funccount, funclatency, funcslower, stackcount, trace, and argdist.

Thanks to Opensource.com for edits.
