From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wiles, Keith"
To: Harsh Patel
CC: Stephen Hemminger, Kyle Larose, users@dpdk.org
Date: Fri, 14 Dec 2018 18:06:35 +0000
Subject: Re: [dpdk-users] Query on handling packets
List-Id: DPDK usage discussions

> On Dec 14, 2018, at 11:41 AM, Harsh Patel wrote:
> 
> Hello,
> It has been a long break since our last message.
> We want to inform you that we have tried a few things, and we will show some results which we think might be relevant to the progress.
> 
> We thought that there might be some relation between the burst size and throughput, so we took a 10 Mbps flow and a 20 Mbps flow and changed the burst size from 1, 2, 4, 8, 16, 32 and so on up to 256, which is the size of the mbuf pool. We found that the throughput we get for all of these flows is in the range of 8.5-9.0 Mbps, which is the bottleneck for the wireless environment.
> 
> Secondly, we modified the value of the divisor in the equation used to calculate TX_TIMEOUT, where we used rte_get_timer_hz()/2048, and changed 2048 to the values 16, 32, 64, ..., 16384. We are not able to see any difference in performance. We had been trying a lot of things and thought this might have some effect; we now guess it doesn't.
> 
> Also, as mentioned earlier, we replaced the code to use pointer arithmetic and allocated a memory pool for the Tx/Rx intermediate buffers used to convert the single-packet flow to a burst and vice versa.
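> As a rough sketch of the TX_TIMEOUT calculation mentioned two paragraphs above (the names are illustrative and reuse the m_portId/m_txBuffer members from the earlier sketch in this thread; this is not the exact code in the repository):
> 
>         /* Flush timeout in TSC cycles; the divisor was varied over 16, 32, ..., 16384 */
>         uint64_t divisor = 2048;
>         uint64_t txTimeoutCycles = rte_get_timer_hz() / divisor;
>         uint64_t nextFlush = rte_rdtsc() + txTimeoutCycles;
> 
>         /* On the send path, flush the buffered packets once the timeout expires */
>         if (rte_rdtsc() >= nextFlush) {
>                 rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer);
>                 nextFlush += txTimeoutCycles;
>         }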
> In that pointer-arithmetic code, we allocated a single memory pool which was used by both the Tx buffer and the Rx buffer. We thought this might have some effect, so we implemented a version with 2 separate memory pools, 1 for Tx and 1 for Rx. But again, in this case we are not able to see any difference in performance.
> 
> The modified code for these experiments is not available on the repository we linked earlier; it only contains some tweaks which are not that important, but you can ask for it if needed. The main code on the repository is working and up to date, and you can have a look at it.
> 
> We wanted to inform you about this and would like to hear from you on what else we can do to find out where the problem is. It would be really helpful if you could point out the mistake or problem in the code, or give an idea as to what might be creating this problem.
> 
> We thank you for your time.

Well, I do not know why you get that level of performance. I assume you are building your code with -O3 optimization. It must be something else, as we know that DPDK performs well, but it could be the C++ code or ????

Can you try VTune, or do you have access to that type of tool, to analyze your complete application?

This seems to be the only direction to go now: find a tool to measure the performance of the code and locate the bottlenecks.

> 
> Regards,
> Harsh and Hrishikesh
> 
> On Mon, 3 Dec 2018 at 15:07, Harsh Patel wrote:
> Hello,
> The data mentioned in the previous mails are observations, and the numbers of threads mentioned are what the system creates on its own, not something we assign to it. I'm not sure how to explain this with a picture, but I will provide a text explanation.
> 
> First, we ran the Linux kernel code which uses raw sockets and we gave it 2 cores. That example used 2 threads on its own.
> Secondly, we ran our DPDK-in-ns-3 code with the same number of cores, i.e. 2 cores. That example spawned 6 threads on its own.
> (Note: these are observations.)
> All of the above statistics were provided to answer the question of whether the two simulations might have been given different numbers of cores, which could have explained the performance bottleneck. Clearly they are both using the same number of cores (2), and the results are what I sent earlier (raw socket ~10 Mbps and DPDK ~2.5 Mbps).
> 
> We then thought that we might give more cores to the DPDK-in-ns-3 code, which might improve its performance.
> This is where we gave 4 cores to our DPDK-in-ns-3 code, which still spawned the same 6 threads. It gave the same results as with 2 cores.
> This was the observation.
> 
> From this, we assume that the number of cores is not the reason for the poor performance; this is not the problem, and we need to look somewhere else.
> So, the problem causing the bottleneck around 2.5 Mbps is somewhere else, and we need to figure that out.
> 
> Ask again if this is not clear. If it is clear, we need to see where the problem is; can you help in finding the reason why this is happening?
> 
> Thanks & Regards,
> Harsh & Hrishikesh
> 
> On Fri, 30 Nov 2018 at 21:24, Wiles, Keith wrote:
> 
> 
> > On Nov 30, 2018, at 3:02 AM, Harsh Patel wrote:
> > 
> > Hello,
> > Sorry for the long delay, we were busy with some exams.
> > 
> > 1) About the NUMA sockets
> > This is the result of the command you mentioned:
> > ======================================================================
> > Core and Socket Information (as reported by '/sys/devices/system/cpu')
> > ======================================================================
> > 
> > cores =  [0, 1, 2, 3]
> > sockets =  [0]
> > 
> >         Socket 0
> >         --------
> > Core 0  [0]
> > Core 1  [1]
> > Core 2  [2]
> > Core 3  [3]
> > 
> > We don't know much about this and would like your input on what else should be checked or what we need to do.
> > 
> > 2) The part where you asked for a graph
> > We used `ps` to analyse which CPU cores are being utilized.
> > The raw socket version had two logical threads which used cores 0 and 1.
> > The DPDK version had 6 logical threads, which also used cores 0 and 1. This is the case for which we showed you the results.
> > As the previous case had 2 cores and was not giving the desired results, we tried to give more cores to see if the DPDK-in-ns-3 code could achieve the desired throughput and pps. (We thought giving more cores might improve the performance.)
> > For this new case, we provided 4 total cores using EAL arguments, upon which it used cores 0-3. And still we got the same results as the ones sent earlier.
> > We think this means that the bottleneck is a different problem, unrelated to the number of cores. (This whole section is an answer to the question raised by Kyle in the last paragraph, to which Keith asked for a graph.)
> 
> In the CPU output above you are running a four-core system with no hyper-threads. This means you only have four cores and four threads in DPDK terms. Using 6 logical threads will not improve performance in the DPDK case. DPDK normally uses a single thread per core. You can have more than one pthread per core, but having more than one thread per core requires the software to switch threads, and context switching is not a performance win in most cases.
> 
> Not sure how your system is set up; a picture could help.
> 
> I will be traveling all next week and responses will be slow.
> 
> > 
> > 3) About updating the TX_TIMEOUT and storing rte_get_timer_hz()
> > We have not tried this yet; we will try it today and send you the status shortly after that.
> > 
> > 4) For the suggestion by Stephen
> > We are not clear on what you suggested, and it would be nice if you could elaborate on your suggestion.
> > 
> > Thanks and Regards,
> > Harsh and Hrishikesh
> > 
> > PS: We are done with our exams and will now be working on this regularly.
> > 
> > On Sun, 25 Nov 2018 at 10:05, Stephen Hemminger wrote:
> > On Sat, 24 Nov 2018 16:01:04 +0000
> > "Wiles, Keith" wrote:
> > 
> > > > On Nov 22, 2018, at 9:54 AM, Harsh Patel wrote:
> > > > 
> > > > Hi
> > > > 
> > > > Thank you so much for the reply and for the solution.
> > > > 
> > > > We used the given code. We were amazed by the pointer arithmetic you used; we got to learn something new.
> > > > 
> > > > But we are still underperforming. The same bottleneck of ~2.5 Mbps is seen.
> > > > 
> > > > We also checked whether the raw socket version was using any extra (logical) cores compared to the DPDK version. We found that the raw socket has 2 logical threads running on 2 logical CPUs, whereas the DPDK version has 6 logical threads on 2 logical CPUs. We also ran the 6 threads on 4 logical CPUs, and still we see the same bottleneck.
> > > > 
> > > > We have updated our code (you can use the same links from the previous mail). It would be helpful if you could help us find what causes the bottleneck.
> > > 
> > > I looked at the code for a few seconds and noticed your TX_TIMEOUT is a macro that calls (rte_get_timer_hz()/2048). Just to be safe I would not call rte_get_timer_hz() each time, but grab the value once, store the hz locally, and use that variable instead. My guess is this will not improve performance, and I would have to look at the code of that routine to see if storing the value locally buys you anything. If getting the hz is just a simple read of a variable then good, but you should still keep a local variable within the object to hold (rte_get_timer_hz()/2048) instead of doing the call and divide each time.
> > > 
> > > > 
> > > > Thanks and Regards,
> > > > Harsh and Hrishikesh
> > > > 
> > > > On Mon, Nov 19, 2018, 19:19 Wiles, Keith wrote:
> > > > 
> > > > 
> > > > > On Nov 17, 2018, at 4:05 PM, Kyle Larose wrote:
> > > > > 
> > > > > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel wrote:
> > > > >> 
> > > > >> Hello,
> > > > >> Thanks a lot for going through the code and providing us with so much
> > > > >> information.
> > > > >> We removed all the memcpy/malloc from the data path as you suggested and
> > > > > ...
> > > > >> After removing this, we are able to see a performance gain but not as good
> > > > >> as raw socket.
> > > > >> 
> > > > > 
> > > > > You're using an unordered_map to map your buffer pointers back to the
> > > > > mbufs. While it may not do a memcpy all the time, it will likely end
> > > > > up doing a malloc arbitrarily when you insert or remove entries from
> > > > > the map. If it needs to resize the table, it'll be even worse. You may
> > > > > want to consider using librte_hash:
> > > > > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > > > > see if you can design the system to avoid needing to do a lookup like
> > > > > this. Can you return a handle with the mbuf pointer and the data
> > > > > together?
> > > > > 
> > > > > You're also using floating point math where it's unnecessary (the
> > > > > timing check). Just multiply the numerator by 1000000 prior to doing
> > > > > the division. I doubt you'll overflow a uint64_t with that. It's not
> > > > > as efficient as integer math, though I'm not sure offhand it'd cause a
> > > > > major perf problem.
> > > > > 
> > > > > One final thing: using a raw socket, the kernel takes over
> > > > > transmitting and receiving to the NIC itself. That means it is free to
> > > > > use multiple CPUs for the rx and tx. I notice that you only have one
> > > > > rx/tx queue, meaning at most one CPU can send and receive packets.
> > > > > When running your performance test with the raw socket, you may want
> > > > > to see how busy the system is doing packet sends and receives. Is it
> > > > > using more than one CPU's worth of processing?
> > > > > Is it using less, but
> > > > > when combined with your main application's usage, the overall system
> > > > > is still using more than one?
> > > > 
> > > > Along with the floating point math, I would remove all floating point math and use the rte_rdtsc() function to work in cycles. Using something like:
> > > > 
> > > > uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);   /* One 16th of a second; use power-of-two divisors (2/4/8/16/32) to keep the math a simple divide */
> > > > 
> > > > cur_tsc = rte_rdtsc();
> > > > 
> > > > next_tsc = cur_tsc + timo;   /* Now next_tsc is the next time to flush */
> > > > 
> > > > while (1) {
> > > >         cur_tsc = rte_rdtsc();
> > > >         if (cur_tsc >= next_tsc) {
> > > >                 flush();
> > > >                 next_tsc += timo;
> > > >         }
> > > >         /* Do other stuff */
> > > > }
> > > > 
> > > > For the m_bufPktMap I would use rte_hash, or do not use a hash at all: grab the buffer address and subtract the mbuf header, i.e.
> > > > mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > > > 
> > > > 
> > > > size_t
> > > > DpdkNetDevice::Write(uint8_t *buffer, size_t length)
> > > > {
> > > >         struct rte_mbuf *pkt;
> > > >         uint64_t cur_tsc;
> > > > 
> > > >         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > > > 
> > > >         /* No need to test pkt, but buffer could be tested to make sure it is not NULL before the math above */
> > > > 
> > > >         pkt->pkt_len = length;
> > > >         pkt->data_len = length;
> > > > 
> > > >         rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);
> > > > 
> > > >         cur_tsc = rte_rdtsc();
> > > > 
> > > >         /* next_tsc is a private variable */
> > > >         if (cur_tsc >= next_tsc) {
> > > >                 rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer);   /* hardcoded the queue id, should be fixed */
> > > >                 next_tsc = cur_tsc + timo;   /* timo is a fixed number of cycles to wait */
> > > >         }
> > > >         return length;
> > > > }
> > > > 
> > > > std::pair<char *, int>
> > > > DpdkNetDevice::Read()
> > > > {
> > > >         struct rte_mbuf *pkt;
> > > > 
> > > >         /* refill only after every packet from the previous burst has been handed out */
> > > >         if (m_rxBuffer->next >= m_rxBuffer->length) {
> > > >                 m_rxBuffer->next = 0;
> > > >                 m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);
> > > > 
> > > >                 if (m_rxBuffer->length == 0)
> > > >                         return std::make_pair((char *)NULL, -1);
> > > >         }
> > > > 
> > > >         pkt = m_rxBuffer->pkts[m_rxBuffer->next++];
> > > > 
> > > >         /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */
> > > > 
> > > >         return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
> > > > }
> > > > 
> > > > void
> > > > DpdkNetDevice::FreeBuf(uint8_t *buf)
> > > > {
> > > >         struct rte_mbuf *pkt;
> > > > 
> > > >         if (!buf)
> > > >                 return;
> > > >         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > > > 
> > > >         rte_pktmbuf_free(pkt);
> > > > }
> > > > 
> > > > When your code is done with the buffer, convert the buffer address back to a rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the copy and the floating point code. Converting my C code to C++: priceless :-)
> > > > 
> > > > Hopefully the buffer address passed is the original buffer address and has not been adjusted.
> > > > 
> > > > 
> > > > Regards,
> > > > Keith
> > > > 
> > > 
> > > Regards,
> > > Keith
> > > 
> > 
> > Also rdtsc causes the cpu to stop doing any look-ahead, so there is a Heisenberg effect.
> > Adding more rdtsc calls will hurt performance.
> > It also looks like your code is not doing bursting correctly.
> > What if multiple packets arrive in one rx_burst?
> 
> Regards,
> Keith
> 

Regards,
Keith
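For reference, the usual shape of a receive path that consumes everything a single rte_eth_rx_burst() call returns (a generic sketch; port_id, MAX_PKT_BURST and process() here are placeholders, not code from the ns-3 module discussed above):

        struct rte_mbuf *pkts[MAX_PKT_BURST];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, MAX_PKT_BURST);

        for (uint16_t i = 0; i < nb_rx; i++) {
                process(pkts[i]);          /* hand every received packet up, not just the first one */
                rte_pktmbuf_free(pkts[i]); /* free (or defer freeing) once the payload is consumed */
        }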