From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jianbo Liu
To: "Ananyev, Konstantin"
Cc: "Richardson, Bruce", "Lu, Wenzhuo", "Zhang, Helin", "dev@dpdk.org"
Date: Fri, 25 Mar 2016 16:53:13 +0800
Subject: Re: [dpdk-dev] [PATCH] ixgbe: avoid unnessary break when checking at the tail of rx hwring

On 22 March 2016 at 22:27, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Jianbo Liu [mailto:jianbo.liu@linaro.org]
>> Sent: Monday, March 21, 2016 2:27 AM
>> To: Richardson, Bruce
>> Cc: Lu, Wenzhuo; Zhang, Helin; Ananyev, Konstantin; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] ixgbe: avoid unnessary break when checking at the tail of rx hwring
>>
>> On 18 March 2016 at 18:03, Bruce Richardson wrote:
>> > On Thu, Mar 17, 2016 at 10:20:01AM +0800, Jianbo Liu wrote:
>> >> On 16 March 2016 at 19:14, Bruce Richardson wrote:
>> >> > On Wed, Mar 16, 2016 at 03:51:53PM +0800, Jianbo Liu wrote:
>> >> >> Hi Wenzhuo,
>> >> >>
>> >> >> On 16 March 2016 at 14:06, Lu, Wenzhuo wrote:
>> >> >> > Hi Jianbo,
>> >> >> >
>> >> >> >
>> >> >> >> -----Original Message-----
>> >> >> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jianbo Liu
>> >> >> >> Sent: Monday, March 14, 2016 10:26 PM
>> >> >> >> To: Zhang, Helin; Ananyev, Konstantin; dev@dpdk.org
>> >> >> >> Cc: Jianbo Liu
>> >> >> >> Subject: [dpdk-dev] [PATCH] ixgbe: avoid unnessary break when checking at the tail of rx hwring
>> >> >> >>
>> >> >> >> When checking the rx ring queue, it's possible that the loop will break at the
>> >> >> >> tail while there are still packets at the head of the queue.
>> >> >> > Would you like to give more details about in what scenario this issue will be hit? Thanks.
>> >> >> >
>> >> >>
>> >> >> vPMD places an extra RTE_IXGBE_DESCS_PER_LOOP - 1 empty
>> >> >> descriptors at the end of the hwring to avoid overflow when checking
>> >> >> on the rx side.
>> >> >>
>> >> >> The loop in _recv_raw_pkts_vec() checks 4 descriptors each
>> >> >> time. If all 4 DD bits are set, all 4 packets are received. That's fine in
>> >> >> the middle of the ring.
>> >> >> But when we come to the end of the hwring with fewer than 4 descriptors
>> >> >> left, we still check 4 descriptors at once, so the extra empty
>> >> >> descriptors are checked along with them.
>> >> >> This time the number of received packets is necessarily less than 4,
>> >> >> and we break out of the loop because of the condition "var !=
>> >> >> RTE_IXGBE_DESCS_PER_LOOP".
>> >> >> So the problem arises: there could be more packets
>> >> >> at the beginning of the hwring still waiting to be received.
>> >> >> I think this fix can avoid this situation, and at least reduce the
>> >> >> latency for the packets at the head.
>> >> >>
>> >> > Packets are always received in order from the NIC, so no packets ever get left
>> >> > behind or skipped on an RX burst call.
>> >> >
>> >> > /Bruce
>> >> >
>> >>
>> >> I know packets are received in order, and no packets will be skipped,
>> >> but some will be left behind, as I explained above.
>> >> vPMD will not receive the nb_pkts requested by one RX burst call, and
>> >> the packets at the beginning of the hwring are still waiting to be received until
>> >> the next call.
>> >>
>> >> Thanks!
>> >> Jianbo
>> > Hi Jianbo,
>> >
>> > OK, I understand now. I'm not sure that this is a significant problem though,
>> > since we are working in polling mode. Is there a performance impact to your
>> > change? I don't think we can reduce performance just to fix this.
>> >
>> > Regards,
>> > /Bruce
>>
>> It could well be a problem because the probability is high.
>> With an rx hwring size of 128 and an rx burst of 32, the probability
>> can be 32/128.
>> I know this change is critical, so I want you (and the maintainers) to do
>> full evaluations of throughput/latency before drawing a conclusion.
>
> I am still not sure what problem you are trying to solve here.
> Yes, a recv_raw_pkts_vec() call doesn't wrap around the HW ring boundary,
> and yes, it can return fewer packets than are actually available from the HW.
> Though, as Bruce pointed out, they'll be returned to the user by the next call.

Have you thought about the interval between these two calls, and how long it could be?
If the application is a simple one like l2fwd/testpmd, that's fine.
But if the interval is long because the application has more work to do, it's a different story.

> Actually recv_pkts_bulk_alloc() works in a similar way.
> Why do you consider that a problem?

The driver should pull packets out of the hardware and hand them to the application
as fast as possible. If it doesn't, further incoming packets may overflow the hardware queue.

I did some testing with pktgen-dpdk, and it behaves a little better with this
patch (at least not worse). Sorry I can't provide more concrete evidence,
because I don't have Ixia/Spirent equipment at hand.
That's why I asked you to do full evaluations before rejecting this patch. :-)

Thanks!

> Konstantin
>
>>
>> Jianbo
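
---

For readers following the thread, below is a minimal, self-contained C sketch of the tail-of-ring break behaviour Jianbo describes, using the ring size (128) and burst size (32) from the discussion. It is not the real _recv_raw_pkts_vec() (which operates on hardware descriptors with SSE intrinsics); RING_SIZE, DESCS_PER_LOOP, dd[] and recv_burst() are hypothetical stand-ins. Run as written, it receives only 2 of the 32 available packets, leaving the 30 at the head of the ring for the next call.

```c
/*
 * Sketch only -- hypothetical names, not the ixgbe driver code.
 * Models a 4-wide RX loop that breaks when a group of descriptors
 * is not fully "done", including at the ring tail where the group
 * overlaps the empty sentinel descriptors.
 */
#include <stdio.h>
#include <stdint.h>

#define RING_SIZE      128   /* rx hwring size from the thread's example */
#define DESCS_PER_LOOP 4     /* descriptors checked per loop iteration */

/* DD ("descriptor done") flags. The DESCS_PER_LOOP - 1 slots past the
 * ring end stay 0, mirroring the empty sentinel descriptors vPMD
 * appends so a 4-wide check cannot overrun the ring. */
static uint8_t dd[RING_SIZE + DESCS_PER_LOOP - 1];

/* Receive up to nb_pkts (assumed <= RING_SIZE) starting at *tail;
 * returns how many packets were actually taken. */
static unsigned recv_burst(unsigned *tail, unsigned nb_pkts)
{
    unsigned received = 0;

    for (unsigned pos = *tail; received < nb_pkts; pos += DESCS_PER_LOOP) {
        unsigned done = 0;

        /* Count contiguous DD bits in this 4-wide group. */
        for (unsigned i = 0; i < DESCS_PER_LOOP && dd[pos + i]; i++)
            done++;
        received += done;

        /* The break under discussion: near the ring end the group
         * includes the zeroed sentinels, so done < DESCS_PER_LOOP and
         * the burst stops -- even if packets wait at slot 0. */
        if (done != DESCS_PER_LOOP)
            break;
    }
    *tail = (*tail + received) % RING_SIZE;
    return received;
}

int main(void)
{
    /* Pretend 32 packets arrived: 2 at the ring tail (slots 126..127)
     * and 30 wrapped around to the head (slots 0..29). */
    for (unsigned i = 126; i < 128; i++) dd[i] = 1;
    for (unsigned i = 0; i < 30; i++)    dd[i] = 1;

    unsigned tail = 126;
    unsigned got = recv_burst(&tail, 32);

    /* Prints "received 2 of 32 available": the burst stops at the
     * sentinels; the 30 head packets wait for the next call. */
    printf("received %u of 32 available\n", got);
    return 0;
}
```

Whether returning early here is acceptable is exactly the trade-off the thread debates: in a tight polling loop the next call picks the packets up almost immediately, but if the application does other work between bursts, the head packets sit in the ring that much longer.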