From: Dor Green
To: Stephen Hemminger
Cc: dev@dpdk.org
Date: Tue, 14 Apr 2015 14:58:23 +0300
Subject: Re: [dpdk-dev] rte_ring's dequeue appears to be slow

Dequeuing is done in bulk (and it is the dequeue that shows up as the CPU
consumer). Enqueuing is not done in bulk, due to a constraint we have.

It seemed likely that the dequeue function was being falsely blamed for
taking up CPU, but in my tests packets arrive constantly, so I don't see
when a poll would return no packets.

Any other ideas to check?
On Mon, Apr 6, 2015 at 11:43 PM, Stephen Hemminger wrote:
> On Mon, 6 Apr 2015 15:18:21 +0300
> Dor Green wrote:
>
>> I have an app which captures packets on a single core and then passes
>> them to multiple workers on different lcores, using the ring queues.
>>
>> While I manage to capture packets at 10Gbps, when I send them to the
>> processing lcores there is substantial packet loss. At first I figured
>> it was the processing I do on the packets, and optimized that, which
>> helped a little but did not alleviate the problem.
>>
>> I used Intel VTune Amplifier to profile the program, and in every
>> profiling run the majority of the program's time is spent in
>> "__rte_ring_sc_do_dequeue" (about 70%). I was wondering if anyone can
>> tell me how to optimize this, whether I'm using the queues
>> incorrectly, or whether I'm doing the profiling wrong (because I do
>> find it weird that this dequeuing is so slow).
>>
>> My program's architecture is as follows (consts replaced with actual
>> values):
>>
>> A queue is created for each processing lcore:
>> rte_ring_create(qname, swsize, NUMA_SOCKET, 1024*1024,
>> RING_F_SP_ENQ | RING_F_SC_DEQ);
>>
>> The capturing core enqueues packets one by one to each of the queues
>> (the packet burst size is 256):
>> rte_ring_sp_enqueue(lc[queue_index].queue, (void *const)pkts[i]);
>>
>> They are then dequeued in bulk on the processing lcores:
>> rte_ring_sc_dequeue_bulk(lc->queue, (void**) &mbufs, 128);
>>
>> I'm using 16 1GB hugepages and running the new 2.0 version. If any
>> further info about the program is needed, let me know.
>>
>> Thank you.
>
> First off, make sure you are enqueuing and dequeuing in bursts
> if possible. That saves a lot of the overhead.
>
> Also, with polling applications, the dequeue function can be
> falsely blamed for taking CPU if, most of the time, the poll does
> not succeed in finding any data.