From: Cliff Burdick
Date: Wed, 8 Jun 2016 12:31:17 -0700
To: Matt Laswell
Cc: users@dpdk.org
Subject: Re: [dpdk-users] KNI Threads/Cores

Thanks Matt! I will try that. It seems very clean.

On Wed, Jun 8, 2016 at 9:45 AM, Matt Laswell wrote:

> Hey Cliff,
>
> I have a similar use case in my application. If you're willing to
> dedicate an lcore per socket, another way to approach what you're
> describing is to create a KNI interface thread that talks to the other
> cores via message rings. That is, the cores that are interacting with the
> NIC read a bunch of packets, determine if any of them need to go to KNI
> and, if so, enqueue them using rte_ring_enqueue(). They also do a periodic
> rte_ring_dequeue() on another queue to accept back any packets that come
> back from KNI.
>
> The KNI interface thread, meanwhile, just loops along, taking packets in
> from the NIC interface threads via rte_ring_dequeue() and sending them to
> KNI, and taking packets from KNI and returning them to the NIC interface
> threads via rte_ring_enqueue().
>
> I've found that this sort of scheme works well and is reasonably clean
> architecturally. I also found that calls into KNI can at times be very
> slow. In my application, I would periodically see KNI calls take 50-100K
> cycles, which can cause congestion if you're handling large volumes of
> traffic. Letting a non-critical thread handle this interface was a big win
> for me.
>
> This leaves out the kernel-side processing, of course. But if the traffic
> going to the kernel is lightweight, you likely don't need a dedicated core
> for the kernel-side RX and TX work.
>
> --
> Matt Laswell
> Principal Software Engineer
> infinite io
>
> On Wed, Jun 8, 2016 at 11:30 AM, Cliff Burdick wrote:
>
>> Hi, I have an application spanning two sockets, where I'm planning to
>> transmit and receive a fairly large amount of traffic on each core. Each
>> core right now handles a single queue of either TX or RX on a given
>> port. Across all the cores, I may be processing up to 12 ports. I also
>> need to handle things like ARP and ping, so I'm going to add in the KNI
>> driver to handle that. Since the amount of traffic I expect to forward
>> to Linux is very small, it seems like I should be able to dedicate one
>> lcore per socket to this functionality and have the dataplane cores pass
>> the traffic off to it using rte_kni_tx_burst().
>>
>> My question is: first of all, is this possible? It seems like I can
>> configure the KNI driver to start in "single thread" mode. From that
>> point, I want to initialize one KNI device for each port and have each
>> kernel lcore on each processor handle that traffic. I believe that if I
>> call rte_kni_alloc() with core_id set to the kernel lcore for each
>> device, I'll end up with something like 6 KNI devices on socket 1
>> handled by lcore 0 and 6 KNI devices on socket 2 handled by lcore 31, as
>> an example. Then the threads handling the dataplane TX/RX can simply be
>> passed a pointer to their respective rte_kni device. Does this sound
>> correct?
>>
>> Also, the sample application says the core affinity needs to be set
>> using taskset. Is that already taken care of by conf.core_id in
>> rte_kni_alloc(), or do I still need to set it?
>>
>> Thanks
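
The ring-based handoff Matt describes above can be sketched roughly as
follows. This is a minimal sketch, not his actual code: the ring names, ring
sizes, burst size, and the assumption of a single `kni` handle per lcore are
all illustrative, and error handling is reduced to dropping on a full ring.

```c
/* Sketch of a dedicated KNI lcore fed by message rings (illustrative
 * names and sizes; not from Matt's application). */
#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_kni.h>

#define KNI_BURST 32

/* Created once at init, e.g.:
 *   to_kni   = rte_ring_create("to_kni",   1024, socket_id, RING_F_SC_DEQ);
 *   from_kni = rte_ring_create("from_kni", 1024, socket_id, RING_F_SP_ENQ);
 */
struct rte_ring *to_kni, *from_kni;

/* On a NIC worker core: divert a control-plane packet (ARP, ICMP, ...)
 * to the KNI lcore instead of calling into KNI directly. */
static void worker_handoff(struct rte_mbuf *m)
{
    if (rte_ring_enqueue(to_kni, m) != 0)
        rte_pktmbuf_free(m);            /* ring full: drop, never block */
}

/* On the dedicated KNI lcore: shuttle packets in both directions. */
static void kni_lcore_loop(struct rte_kni *kni)
{
    struct rte_mbuf *pkts[KNI_BURST];
    unsigned i, n, sent;

    for (;;) {
        /* workers -> kernel */
        for (n = 0; n < KNI_BURST &&
             rte_ring_dequeue(to_kni, (void **)&pkts[n]) == 0; n++)
            ;
        if (n) {
            sent = rte_kni_tx_burst(kni, pkts, n);
            while (sent < n)             /* free what KNI didn't take */
                rte_pktmbuf_free(pkts[sent++]);
        }

        /* kernel -> workers */
        n = rte_kni_rx_burst(kni, pkts, KNI_BURST);
        for (i = 0; i < n; i++)
            if (rte_ring_enqueue(from_kni, pkts[i]) != 0)
                rte_pktmbuf_free(pkts[i]);

        /* service kernel-side requests (link up/down, MTU change) */
        rte_kni_handle_request(kni);
    }
}
```

With 6 ports per socket you would either loop this over an array of rte_kni
handles or give each port its own ring pair; either way the slow KNI calls
stay off the dataplane cores, which is the point of the design.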
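
On the taskset question raised above: my understanding (worth verifying
against your DPDK version) is that conf.core_id together with
conf.force_bind pins the per-device kernel thread only when rte_kni.ko is
loaded with kthread_mode=multiple; in single-thread mode there is one shared
kthread for all devices, which is why the sample application pins it with
taskset. A rough per-port allocation sketch, where the interface name
pattern, mbuf size, and core numbers are placeholder assumptions:

```c
/* Per-port KNI allocation, binding each device's kernel thread to a
 * chosen core (requires rte_kni.ko loaded with kthread_mode=multiple;
 * values here are illustrative). */
#include <stdio.h>
#include <string.h>
#include <rte_kni.h>
#include <rte_mempool.h>

static struct rte_kni *
setup_kni_for_port(struct rte_mempool *mp, uint8_t port_id,
                   unsigned kernel_core)
{
    struct rte_kni_conf conf;
    struct rte_kni_ops ops;

    memset(&conf, 0, sizeof(conf));
    snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
    conf.group_id  = port_id;
    conf.mbuf_size = 2048;          /* placeholder */
    conf.core_id   = kernel_core;   /* kernel thread affinity...        */
    conf.force_bind = 1;            /* ...bound without needing taskset */

    memset(&ops, 0, sizeof(ops));
    ops.port_id = port_id;
    /* ops.change_mtu / ops.config_network_if callbacks would go here */

    return rte_kni_alloc(mp, &conf, &ops);
}
```

Under that scheme, calling setup_kni_for_port() with kernel_core set to
lcore 0 for the socket-1 ports and lcore 31 for the socket-2 ports would
give the 6+6 layout described above, with no taskset step needed.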