From: Matt Laswell
To: Cliff Burdick
Cc: users <users@dpdk.org>
Date: Wed, 8 Jun 2016 11:45:26 -0500
Subject: Re: [dpdk-users] KNI Threads/Cores

Hey Cliff,

I have a similar use case in my application. If you're willing to dedicate
an lcore per socket, another way to approach what you're describing is to
create a KNI interface thread that talks to the other cores via message
rings. That is, the cores that are interacting with the NIC read a bunch
of packets, determine if any of them need to go to KNI and, if so, enqueue
them using rte_ring_enqueue(). They also do a periodic rte_ring_dequeue()
on another ring to accept back any packets that come back from KNI. The
KNI interface thread, meanwhile, just loops along, taking packets in from
the NIC interface threads via rte_ring_dequeue() and sending them to KNI,
and taking packets from KNI and returning them to the NIC interface
threads via rte_ring_enqueue().

I've found that this sort of scheme works well and is reasonably clean
architecturally. Also, I've found that calls into KNI can at times be very
slow: in my application, I would periodically see KNI calls take 50-100K
cycles, which can cause congestion if you're handling large volumes of
traffic. Letting a non-critical thread handle this interface was a big win
for me.
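In case it helps, here's roughly what the dedicated KNI core's loop looks
like. Treat this as a minimal sketch rather than code from my app: the
kni_port struct, the ring names, NUM_KNI_PORTS, and the burst size are all
placeholders, I'm using the burst variants of the ring calls, and error
handling is trimmed for brevity.

#include <rte_kni.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define KNI_BURST     32
#define NUM_KNI_PORTS 6   /* placeholder: ports handled on this socket */

struct kni_port {
    struct rte_kni  *kni;       /* KNI device for this port */
    struct rte_ring *to_kni;    /* dataplane cores -> this core */
    struct rte_ring *from_kni;  /* this core -> dataplane cores */
};

/* Main loop for the dedicated per-socket KNI lcore. */
static int
kni_lcore_main(void *arg)
{
    struct kni_port *ports = arg;
    struct rte_mbuf *pkts[KNI_BURST];
    unsigned int i, n, sent;

    for (;;) {
        for (i = 0; i < NUM_KNI_PORTS; i++) {
            struct kni_port *kp = &ports[i];

            /* Packets the dataplane cores queued for the kernel. */
            n = rte_ring_dequeue_burst(kp->to_kni, (void **)pkts,
                                       KNI_BURST);
            if (n > 0) {
                sent = rte_kni_tx_burst(kp->kni, pkts, n);
                while (sent < n)        /* drop what KNI couldn't take */
                    rte_pktmbuf_free(pkts[sent++]);
            }

            /* Packets coming back out of the kernel. */
            n = rte_kni_rx_burst(kp->kni, pkts, KNI_BURST);
            if (n > 0) {
                sent = rte_ring_enqueue_burst(kp->from_kni,
                                              (void **)pkts, n);
                while (sent < n)        /* ring full: drop the rest */
                    rte_pktmbuf_free(pkts[sent++]);
            }

            /* Service MTU/ifconfig requests from the kernel side. */
            rte_kni_handle_request(kp->kni);
        }
    }
    return 0;
}

The nice part of this split is that the occasional slow KNI call only ever
stalls this one core; the dataplane cores just see non-blocking ring
operations.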
This leaves the kernel-side processing out, of course. But if the traffic
going to the kernel is lightweight, you likely don't need a dedicated core
for the kernel-side RX and TX work.

--
Matt Laswell
Principal Software Engineer
infinite io

On Wed, Jun 8, 2016 at 11:30 AM, Cliff Burdick wrote:

> Hi, I have an application with two sockets where I'm planning to
> transmit and receive a fairly large amount of traffic per core. Each
> core right now handles a single TX or RX queue for a given port. Across
> all the cores, I may be processing up to 12 ports. I also need to handle
> things like ARP and ping, so I'm going to add in the KNI driver to
> handle that. Since the amount of traffic I'm expecting to forward to
> Linux is very small, it seems like I should be able to dedicate one
> lcore per socket to this functionality and have the dataplane cores pass
> the traffic off to that core using rte_kni_tx_burst().
>
> My question is: first of all, is this possible? It seems like I can
> configure the KNI driver to start in "single thread" mode. From that
> point, I want to initialize one KNI device for each port, and have each
> kernel lcore on each processor handle that traffic. I believe if I call
> rte_kni_alloc() with core_id set to the kernel lcore for each device,
> then in the end I'll have something like 6 KNI devices on socket 1 being
> handled by lcore 0, and 6 KNI devices on socket 2 being handled by lcore
> 31, as an example. Then my threads that are handling the dataplane TX/RX
> can simply be passed a pointer to their respective rte_kni device. Does
> this sound correct?
>
> Also, the sample says the core affinity needs to be set using taskset.
> Is that already taken care of by conf.core_id in rte_kni_alloc(), or do
> I still need to set it?
>
> Thanks
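One more thing on the conf.core_id question, from memory, so please verify
against rte_kni.h and the KNI docs for your release: I believe core_id and
force_bind only control the binding of the *kernel* thread(s) serving the
KNI device, and only when the rte_kni module is loaded with
kthread_mode=multiple. In single-thread mode there's just one kernel
thread for all devices, and taskset is still how you pin it; the userspace
side is whatever lcore you launch via the EAL. A rough sketch of the
per-port allocation (names are placeholders, and the NULL ops argument
skips the callbacks you'd normally supply):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <rte_kni.h>
#include <rte_mempool.h>

static struct rte_kni *
alloc_kni_for_port(struct rte_mempool *mp, uint8_t port_id,
                   uint32_t kernel_lcore)
{
    struct rte_kni_conf conf;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
    conf.group_id   = port_id;
    conf.mbuf_size  = 2048;
    conf.core_id    = kernel_lcore; /* kernel thread core; only honored
                                     * in multiple-kthread mode */
    conf.force_bind = 1;            /* actually pin the kernel thread */

    /* NULL ops: no change_mtu/config_network_if callbacks in this
     * sketch; a real app will probably want them. */
    return rte_kni_alloc(mp, &conf, NULL);
}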