From: Harsh Patel
Date: Tue, 5 Feb 2019 12:07:05 +0530
To: "Wiles, Keith"
Cc: Stephen Hemminger, Kyle Larose, users@dpdk.org
Subject: Re: [dpdk-users] Query on handling packets

Hi,

We would like to inform you that our code is working as expected, and we are able to obtain a 95-98 Mbps data rate for a 100 Mbps application rate. We are now working on testing the code. Thanks a lot, especially to Keith, for all the help you provided.

We have 2 main queries:

1) We wanted to calculate the backlog at the NIC Tx descriptors but were not able to find anything about it in the documentation. Could you help us with how to calculate this backlog?

2) We searched for how to use Byte Queue Limits (BQL) on the NIC queue but couldn't find anything like that in DPDK. Does DPDK support BQL? If so, can you help us with how to use it in our project?

Thanks & Regards,
Harsh & Hrishikesh
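(For reference, not from the thread: BQL is a Linux kernel mechanism, and as far as we know DPDK has no direct equivalent; the closest substitute is tracking the in-flight Tx descriptors yourself. Below is a minimal sketch of a backlog estimate, assuming the PMD implements the optional rte_eth_tx_descriptor_status() ethdev call; the helper name tx_backlog_estimate and the ring-size parameter nb_txd are our own placeholders.)

#include <rte_ethdev.h>

/* Count Tx descriptors still owned by the NIC, i.e. packets queued
 * but not yet transmitted.  Returns the count, or a negative errno
 * such as -ENOTSUP when the PMD has no descriptor-status callback. */
static int
tx_backlog_estimate(uint16_t port_id, uint16_t queue_id, uint16_t nb_txd)
{
    int backlog = 0;

    for (uint16_t off = 0; off < nb_txd; off++) {
        int st = rte_eth_tx_descriptor_status(port_id, queue_id, off);

        if (st < 0)
            return st;                /* e.g. -ENOTSUP */
        if (st == RTE_ETH_TX_DESC_FULL)
            backlog++;                /* still waiting for the wire */
    }
    return backlog;
}

(A BQL-style limit could then be approximated by pausing enqueue when this estimate exceeds a byte or packet budget, but that would be application-level policy, not a DPDK API.)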
On Thu, 31 Jan 2019 at 22:28, Wiles, Keith wrote:

> Sent from my iPhone
>
> On Jan 30, 2019, at 5:36 PM, Harsh Patel wrote:
>
> Hello,
>
> This mail is to inform you that the integration of DPDK is working with ns-3 on a basic level. The model is running.
> For UDP traffic we are getting the same or better throughput than the raw socket version (around 100 Mbps).
> But unfortunately for TCP, there are burst packet losses, due to which the throughput is drastically affected after some point in time. The bandwidth of the link used was 100 Mbps.
> We have obtained cwnd and ssthresh graphs, which show that once the flow gets out of Slow Start mode, there are so many packet losses that the congestion window and the slow start threshold are not able to go above 4-5 packets.
>
> Can you determine where the packets are being dropped?
>
> We have attached the graphs with this mail.
>
> I do not see the graphs attached, but that's OK.
>
> We would like to know if there is any reason for this or how we can fix it.
>
> I think we have to find out where the packets are being dropped; that is the only explanation for the case you are referring to.
>
> Thanks & Regards
> Harsh & Hrishikesh
>
> On Wed, 16 Jan 2019 at 19:25, Harsh Patel wrote:
>
>> Hi
>>
>> We were able to optimise the DPDK version. There were a couple of things we needed to do.
>>
>> We were using a Tx timeout of 1s/2048, which we found to be too short. When we increased the timeout, we were getting a lot of retransmissions.
>>
>> So we removed the timeout and sent each packet as soon as we got it. This increased the throughput.
>>
>> Then we used the DPDK feature to launch a function on a core, and gave a dedicated core to Rx. This increased the throughput further.
>>
>> The code is working really well for low bandwidth (<~50 Mbps) and is outperforming the raw socket version. But for high bandwidth, we are getting packet length mismatches for some reason. We are investigating it.
>>
>> We really thank you for your suggestions and also for your patience over the last couple of months.
>>
>> Thank you
>>
>> Regards,
>> Harsh & Hrishikesh
>>
>> On Fri, Jan 4, 2019, 11:27 Harsh Patel wrote:
>>
>>> Yes, that would be helpful.
>>> It'd be OK for now to use the same DPDK version to overcome the build issues.
>>> We will look into updating the code for the latest versions once we get past this problem.
>>>
>>> Thank you very much.
>>>
>>> Regards,
>>> Harsh & Hrishikesh
>>>
>>> On Fri, Jan 4, 2019, 04:13 Wiles, Keith wrote:
>>>
>>>> > On Jan 3, 2019, at 12:12 PM, Harsh Patel wrote:
>>>> >
>>>> > Hi
>>>> >
>>>> > We applied your suggestion of removing the IsLinkUp() call, but the performance is even worse. We could only get around 340 kbit/s.
>>>> >
>>>> > The Top Hotspots are:
>>>> >
>>>> > Function                          Module                              CPU Time
>>>> > eth_em_recv_pkts                  librte_pmd_e1000.so                 15.106s
>>>> > rte_delay_us_block                librte_eal.so.6.1                    7.372s
>>>> > ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so   5.080s
>>>> > rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so   3.558s
>>>> > ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so   3.364s
>>>> > [Others]                                                               4.760s
>>>>
>>>> Performance was reduced by removing that link status check; that is weird.
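(For reference, not from the thread: one hedged guess about the IsLinkUp() trade-off above is that the blocking rte_eth_link_get() can stall while the PHY negotiates, so calling it per packet is expensive, while never calling it may hide state the driver refreshes lazily. A middle ground is caching the state from the non-blocking variant. A minimal sketch; refresh_link_state and cached_link_up are our own names.)

#include <stdbool.h>
#include <string.h>
#include <rte_ethdev.h>

static bool cached_link_up;

/* Refresh the cached link state without blocking; call this from a
 * slow path (e.g. a periodic timer), never per packet. */
static void
refresh_link_state(uint16_t port_id)
{
    struct rte_eth_link link;

    memset(&link, 0, sizeof(link));
    rte_eth_link_get_nowait(port_id, &link); /* returns last known state */
    cached_link_up = (link.link_status == ETH_LINK_UP);
}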
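(Also for reference: the "DPDK feature to launch a function on a core" in the 16 Jan message above is presumably rte_eal_remote_launch(). A sketch of dedicating a worker lcore to Rx under that assumption; rx_loop, start_rx_core, quit and BURST are our own names, and the per-packet handoff to the application is elided.)

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST 32

static volatile bool quit;

/* Poll one Rx queue in a tight loop on a dedicated lcore. */
static int
rx_loop(void *arg)
{
    uint16_t port_id = *(uint16_t *)arg;
    struct rte_mbuf *bufs[BURST];

    while (!quit) {
        uint16_t n = rte_eth_rx_burst(port_id, 0, bufs, BURST);

        for (uint16_t i = 0; i < n; i++) {
            /* hand bufs[i] to the application here, then release it */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}

/* Called from the main lcore after rte_eal_init() and port setup. */
static void
start_rx_core(uint16_t *port_id)
{
    unsigned int lcore = rte_get_next_lcore(-1, 1, 0); /* first worker */

    rte_eal_remote_launch(rx_loop, port_id, lcore);
}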
>>>> >
>>>> > Upon checking the callers of rte_delay_us_block, we got to know that most of the time (92%) spent in this function is during initialization.
>>>> > This does not waste our processing time during communication. So, it's a good start to our optimization.
>>>> >
>>>> > Callers                               CPU Time: Total   CPU Time: Self
>>>> > rte_delay_us_block                    100.0%            7.372s
>>>> > e1000_enable_ulp_lpt_lp                92.3%            6.804s
>>>> > e1000_write_phy_reg_mdic                1.8%            0.136s
>>>> > e1000_reset_hw_ich8lan                  1.7%            0.128s
>>>> > e1000_read_phy_reg_mdic                 1.4%            0.104s
>>>> > eth_em_link_update                      1.4%            0.100s
>>>> > e1000_get_cfg_done_generic              0.7%            0.052s
>>>> > e1000_post_phy_reset_ich8lan.part.18    0.7%            0.048s
>>>>
>>>> I guess you are having VTune start your application, and that is why you have init-time items in your log. I normally start my application and then attach VTune to it; one of the options when configuring a VTune project is to attach to a running application. Maybe that would help here.
>>>>
>>>> Looking at the data you provided, it was OK. The problem is it would not load the source files, as I did not have the same build or executable. I tried to build the code, but it failed to build and I did not go further. I guess I would need to see the full source tree and the executable you used to really look at the problem. I have limited time, but I can try if you like.
>>>> >
>>>> > Effective CPU Utilization: 21.4% (0.856 out of 4)
>>>> >
>>>> > Here is the link to the VTune profiling results:
>>>> > https://drive.google.com/open?id=1M6g2iRZq2JGPoDVPwZCxWBo7qzUhvWi5
>>>> >
>>>> > Thank you
>>>> >
>>>> > Regards
>>>> >
>>>> > On Sun, Dec 30, 2018, 06:00 Wiles, Keith wrote:
>>>> >
>>>> > > On Dec 29, 2018, at 4:03 PM, Harsh Patel wrote:
>>>> > >
>>>> > > Hello,
>>>> > > As suggested, we tried profiling the application using Intel VTune Amplifier. We aren't sure how to use these results, so we are attaching them to this email.
>>>> > >
>>>> > > The things we understood were 'Top Hotspots' and 'Effective CPU Utilization'. Following are some of our understandings:
>>>> > >
>>>> > > Top Hotspots
>>>> > >
>>>> > > Function                          Module                              CPU Time
>>>> > > rte_delay_us_block                librte_eal.so.6.1                   15.042s
>>>> > > eth_em_recv_pkts                  librte_pmd_e1000.so                  9.544s
>>>> > > ns3::DpdkNetDevice::Read          libns3.28.1-fd-net-device-debug.so   3.522s
>>>> > > ns3::DpdkNetDeviceReader::DoRead  libns3.28.1-fd-net-device-debug.so   2.470s
>>>> > > rte_eth_rx_burst                  libns3.28.1-fd-net-device-debug.so   2.456s
>>>> > > [Others]                                                               6.656s
>>>> > >
>>>> > > We knew about the other methods, except rte_delay_us_block. So we investigated the callers of this method:
>>>> > >
>>>> > > Callers                               Effective Time  Spin Time  Overhead Time  Effective Time: Self  Wait Time: Total  Wait Time: Self
>>>> > > e1000_enable_ulp_lpt_lp               45.6%           0.0%       0.0%           6.860s                0usec             0usec
>>>> > > e1000_write_phy_reg_mdic              32.7%           0.0%       0.0%           4.916s                0usec             0usec
>>>> > > e1000_read_phy_reg_mdic               19.4%           0.0%       0.0%           2.922s                0usec             0usec
>>>> > > e1000_reset_hw_ich8lan                 1.0%           0.0%       0.0%           0.143s                0usec             0usec
>>>> > > eth_em_link_update                     0.7%           0.0%       0.0%           0.100s                0usec             0usec
>>>> > > e1000_post_phy_reset_ich8lan.part.18   0.4%           0.0%       0.0%           0.064s                0usec             0usec
>>>> > > e1000_get_cfg_done_generic             0.2%           0.0%       0.0%           0.037s                0usec             0usec
>>>> > >
>>>> > > We lack sufficient knowledge to investigate more than this.
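(For reference, not from the thread: a low-effort complement to VTune when chasing the packet losses discussed earlier is the port-level counters from rte_eth_stats_get(), which separate NIC Rx-ring overflows (imissed), mbuf-pool exhaustion (rx_nombuf), malformed packets (ierrors) and failed transmits (oerrors). A minimal sketch; dump_drop_counters is our own name.)

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print the counters most useful for locating where packets are lost. */
static void
dump_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats st;

    if (rte_eth_stats_get(port_id, &st) != 0)
        return;

    printf("rx=%" PRIu64 " tx=%" PRIu64
           " imissed=%" PRIu64    /* Rx ring full: NIC dropped */
           " rx_nombuf=%" PRIu64  /* mbuf pool exhausted */
           " ierrors=%" PRIu64    /* bad packets on Rx */
           " oerrors=%" PRIu64 "\n",
           st.ipackets, st.opackets, st.imissed,
           st.rx_nombuf, st.ierrors, st.oerrors);
}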
>>>> > >
>>>> > > Effective CPU Utilization
>>>> > >
>>>> > > Interestingly, the effective CPU utilization was 20.8% (0.832 out of 4 logical CPUs). We thought this was low, so we compared it with the raw-socket version of the code, which was even lower at 8.0% (0.318 out of 4 logical CPUs), and even then it performs way better.
>>>> > >
>>>> > > It would be helpful if you could give us insights on how to use these results or point us to some resources on how to do so.
>>>> > >
>>>> > > Thank you
>>>> >
>>>> > BTW, I was able to build ns-3 with DPDK 18.11. It required a couple of changes in the DPDK init code in ns-3, plus one hack in the rte_mbuf.h file.
>>>> >
>>>> > I did have a problem including the rte_mbuf.h file in your code. It appears the g++ compiler did not like referencing struct rte_mbuf_sched inside the rte_mbuf structure; rte_mbuf_sched sits inside the big union. As a hack I moved the struct outside of the rte_mbuf structure and replaced the struct in the union with 'struct rte_mbuf_sched sched;', but I am guessing you are missing some compiler options in your build system, as DPDK builds just fine without that hack.
>>>> >
>>>> > The next place was the rxmode and the txq_flags. The rxmode structure has changed, so I commented out the inits in ns-3 and then commented out the txq_flags init code, as these are now the defaults.
>>>> >
>>>> > Regards,
>>>> > Keith
>>>>
>>>> Regards,
>>>> Keith
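(For reference, not from the thread: a sketch of the DPDK 18.11-era bring-up Keith describes, where the old rxmode bit-field inits and txq_flags are gone and the Tx queue defaults come from the device info instead. setup_port, pool, nb_rxd and nb_txd are our own placeholders.)

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* 18.11-style port bring-up: default rte_eth_conf, no txq_flags. */
static int
setup_port(uint16_t port_id, struct rte_mempool *pool)
{
    struct rte_eth_conf port_conf;
    struct rte_eth_dev_info dev_info;
    uint16_t nb_rxd = 1024, nb_txd = 1024;
    int socket = rte_eth_dev_socket_id(port_id);

    memset(&port_conf, 0, sizeof(port_conf)); /* defaults throughout */
    rte_eth_dev_info_get(port_id, &dev_info);

    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) != 0)
        return -1;
    if (rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket,
                               NULL /* default rxconf */, pool) != 0)
        return -1;
    /* default_txconf replaces the removed txq_flags initialisation */
    if (rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket,
                               &dev_info.default_txconf) != 0)
        return -1;
    return rte_eth_dev_start(port_id);
}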