High Performance Linux

Friday, September 21, 2012

Linux: scaling softirq among many CPU cores

Some years ago I tested network interrupt affinity: you set ~0 as the CPU mask to balance network interrupts among all your CPU cores, and you get all softirq instances running in parallel. Distributing interrupts among CPU cores this way is sometimes a bad idea due to CPU cache pollution and probable packet reordering, so in most cases it is not recommended for servers running a TCP application (e.g. a web server). However, this ability is crucial for low-level packet applications such as firewalls, routers or anti-DDoS solutions (in the last case most packets must be dropped as quickly as possible), which do a lot of work in softirq. So for some time I believed there was no problem sharing softirq load between CPU cores.

To share softirq load between CPU cores you just need to do:

    $ for irq in `grep eth0 /proc/interrupts | cut -d: -f1`; do \
        echo ffffff > /proc/irq/$irq/smp_affinity; \
      done

This makes (as I thought) the APIC distribute interrupts among all your CPUs in round-robin fashion (or perhaps using some more clever technique). And this really did work in my tests.

Recently a client of ours asked about this ability, so I wrote a very simple testing kernel module which just does extra work in softirq:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>

/*
 * Just eat some local CPU time and accept the packet.
 */
static unsigned int
st_hook(unsigned int hooknum, struct sk_buff *skb,
        const struct net_device *in,
        const struct net_device *out,
        int (*okfn)(struct sk_buff *))
{
    unsigned int i;
    for (i = 0; i <= 1000 * 1000; ++i)
        ;

    return NF_ACCEPT;
}

static struct nf_hook_ops st_ip_ops[] __read_mostly = {
    {
        .hook = st_hook,
        .owner = THIS_MODULE,
        .pf = PF_INET,
        .hooknum = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
    },
};

static int __init
st_init(void)
{
    if (nf_register_hooks(st_ip_ops, ARRAY_SIZE(st_ip_ops))) {
        printk(KERN_ERR "%s: can't register nf hook\n", __FILE__);
        return 1;
    }
    printk(KERN_ERR "%s: loaded\n", __FILE__);

    return 0;
}

static void __exit
st_exit(void)
{
    nf_unregister_hooks(st_ip_ops, ARRAY_SIZE(st_ip_ops));
    printk(KERN_ERR "%s: unloaded\n", __FILE__);
}

module_init(st_init);
module_exit(st_exit);

MODULE_LICENSE("GPL");

I loaded the system with iperf over a 1Gbps channel. And I was very confused to see that only one CPU of the 24-core machine was doing all the work, while all the other CPUs were doing nothing!

To understand what's going on, let's have a look at how Linux handles incoming packets and interrupts from a network card (e.g. an Intel 10 Gigabit PCI Express adapter, whose driver lives in drivers/net/ixgbe). Softirqs are processed in per-CPU kernel threads, ksoftirqd (kernel/softirq.c: ksoftirqd()), i.e. if you have a 4-core machine, then you have 4 ksoftirqd threads (ksoftirqd/0, ksoftirqd/1, ksoftirqd/2 and ksoftirqd/3). ksoftirqd() calls do_softirq(), which in turn calls __do_softirq(). The latter uses the softirq_vec vector to get the required handler for the current softirq type (e.g. NET_RX_SOFTIRQ for receiving or NET_TX_SOFTIRQ for sending, correspondingly). The next step is to call the handler's virtual function action(); for NET_RX_SOFTIRQ, net_rx_action() (net/core/dev.c) is called here.

net_rx_action() reads napi_struct entries from the per-CPU queue softnet_data and calls their virtual function poll() - a NAPI callback (ixgbe_poll() in our case) which actually reads packets from the device ring queues. The driver processes interrupts in ixgbe_intr(). This function schedules NAPI via __napi_schedule(), which pushes the current napi_struct onto the per-CPU softnet_data->poll_list, from which net_rx_action() reads packets (on the same CPU). Thus a softirq runs on the same core which received the hardware interrupt.

Thus, theoretically, if hardware interrupts go to N cores, then these and only these N cores do softirq work. So I had a look at the /proc/interrupts statistics and saw that only core 0 was actually receiving interrupts from the NIC, even though I had set the ~0 mask in smp_affinity for the interrupt (actually I had an MSI-X card, so I set the mask for all the interrupt vectors of the card).

I started googling for answers as to why on earth interrupts are not distributed among all the cores. The first topics I found were nice articles by Alexander Sandler:

According to these articles, not all hardware is actually able to spread interrupts between CPU cores. During my tests I was using IBM servers of a particular model, but this is not the case for the client - they use very different hardware. This is why I saw such a nice picture in my previous tests, but faced quite different behaviour on other hardware.

The good news is that Linux 2.6.35 introduced a nice feature - RPS (Receive Packet Steering). The core of the feature is get_rps_cpu() in net/core/dev.c, which computes a hash from the source and destination IP addresses of an incoming packet and, based on the hash, determines which CPU to send the packet to. netif_receive_skb() or netif_rx(), which call the function, put the packet into the appropriate per-CPU queue for further processing by softirq. So there are two important consequences:
  1. packets are processed by different CPUs (by processing I mostly mean Netfilter pre-routing hooks);
  2. it is unlikely that packets belonging to the same TCP stream are reordered (packet reordering is a well-known problem for TCP performance; see for example Beyond Softnet).
To enable the feature you should specify the CPU mask as follows (the adapter from the example is connected via MSI-X and has 8 tx-rx queues, so we need to update the masks for all the queues):

    $ for i in `seq 0 7`; do \
        echo fffffff > /sys/class/net/eth0/queues/rx-$i/rps_cpus ; \
      done

After running linux-2.6.35 and allowing all CPUs to process softirqs, I got the following nice picture in top:

  2238 root      20   0  411m  888  740 S  152  0.0   2:38.94 iperf
    10 root      20   0     0    0    0 R  100  0.0   0:35.44 ksoftirqd/2
    19 root      20   0     0    0    0 R  100  0.0   0:46.48 ksoftirqd/5
    22 root      20   0     0    0    0 R  100  0.0   0:29.10 ksoftirqd/6
    25 root      20   0     0    0    0 R  100  0.0   2:47.36 ksoftirqd/7
    28 root      20   0     0    0    0 R  100  0.0   0:33.73 ksoftirqd/8
    31 root      20   0     0    0    0 R  100  0.0   0:46.63 ksoftirqd/9
    40 root      20   0     0    0    0 R  100  0.0   0:45.33 ksoftirqd/12
    46 root      20   0     0    0    0 R  100  0.0   0:29.10 ksoftirqd/14
    49 root      20   0     0    0    0 R  100  0.0   0:47.35 ksoftirqd/15
    52 root      20   0     0    0    0 R  100  0.0   2:33.74 ksoftirqd/16
    55 root      20   0     0    0    0 R  100  0.0   0:46.92 ksoftirqd/17
    58 root      20   0     0    0    0 R  100  0.0   0:32.07 ksoftirqd/18
    67 root      20   0     0    0    0 R  100  0.0   0:46.63 ksoftirqd/21
    70 root      20   0     0    0    0 R  100  0.0   0:28.95 ksoftirqd/22
    73 root      20   0     0    0    0 R  100  0.0   0:45.03 ksoftirqd/23
     7 root      20   0     0    0    0 R   99  0.0   0:47.97 ksoftirqd/1
    37 root      20   0     0    0    0 R   99  0.0   2:42.29 ksoftirqd/11
    34 root      20   0     0    0    0 R   77  0.0   0:28.78 ksoftirqd/10
    64 root      20   0     0    0    0 R   76  0.0   0:30.34 ksoftirqd/20

So, as we see, almost all of the cores are processing softirqs.


  1. If we ever meet, I owe you a huge and tasty beer :D This article saved my master's degree!!! :D

  2. What a nice article! This is exactly what I need.
    But once I applied RPS on an 8-core machine, I started seeing a system hang with the following back trace:
    INFO: rcu_sched self-detected stall on CPU { 4} (t=5251 jiffies g=207806 c=207805 q=168)
    CPU: 4 PID: 2646 Comm: bash Tainted: P O 3.12.19-linux #15
    INFO: rcu_sched detected stalls on CPUs/tasks: { 4} (detected by 5, t=5252 jiffies, g=207806, c=207805, q=168)
    Task dump for CPU 4:
    bash R running task 0 2646 2626 0x00000004
    Call Trace:
    [c0000000f3873910] [0000000000000001] 0x1 (unreliable)
    Call Trace:
    [c0000000f3872ee0] [c00000000000a144] .show_stack+0x168/0x278 (unreliable)
    [c0000000f3872fd0] [c0000000008ac730] .dump_stack+0x84/0xb0
    [c0000000f3873050] [c0000000000d18e0] .rcu_check_callbacks+0x3f8/0x868
    [c0000000f3873190] [c00000000005871c] .update_process_times+0x50/0x94
    [c0000000f3873220] [c0000000000b2500] .tick_sched_handle.isra.17+0x5c/0x7c
    [c0000000f38732b0] [c0000000000b2584] .tick_sched_timer+0x64/0xa0
    [c0000000f3873350] [c000000000079164] .__run_hrtimer+0xc0/0x250
    [c0000000f38733f0] [c00000000007a00c] .hrtimer_interrupt+0x144/0x31c
    [c0000000f3873500] [c000000000013140] .timer_interrupt+0x12c/0x270
    [c0000000f38735b0] [c00000000001d054] exc_0x900_common+0x104/0x108
    --- Exception: 901 at .smp_call_function_many+0x344/0x3d4
    LR = .smp_call_function_many+0x300/0x3d4
    [c0000000f38738a0] [c0000000000b9794] .smp_call_function_many+0x2dc/0x3d4 (unreliable)
    [c0000000f3873980] [c00000000002daac] .flush_tlb_mm+0xac/0xb4
    [c0000000f3873a20] [c00000000013fdac] .tlb_flush_mmu.part.77+0x3c/0xbc
    [c0000000f3873ab0] [c000000000140050] .tlb_finish_mmu+0x7c/0x80
    [c0000000f3873b30] [c000000000148c2c] .unmap_region+0xf4/0x144
    [c0000000f3873c60] [c00000000014b630] .do_munmap+0x27c/0x394
    [c0000000f3873d20] [c00000000014b79c] .vm_munmap+0x54/0x88
    [c0000000f3873db0] [c00000000014c774] .SyS_munmap+0x28/0x38
    [c0000000f3873e30] [c000000000000598] syscall_exit+0x0/0x8c

    1. Hi,

      you have obviously hit a kernel bug. Unfortunately, I can't get a clue from the call trace. Meanwhile, RPS is a quite old and stable feature, so I believe you are using an unstable kernel, or some buggy drivers or custom kernel modules.

  3. Great find, answers some questions I've had for a long time

  4. Very nice article with a lot of explanations, you made my softirq struggle become much clearer.
    Thank you!