From: "Wiles, Keith"
To: Harsh Patel
CC: Stephen Hemminger, Kyle Larose, "users@dpdk.org"
Date: Fri, 30 Nov 2018 15:54:11 +0000
Subject: Re: [dpdk-users] Query on handling packets

> On Nov 30, 2018, at 3:02 AM, Harsh Patel wrote:
>
> Hello,
> Sorry for the long delay, we were busy with some exams.
>
> 1) About the NUMA sockets
> This is the result of the command you mentioned:
> ======================================================================
> Core and Socket Information (as reported by '/sys/devices/system/cpu')
> ======================================================================
>
> cores = [0, 1, 2, 3]
> sockets = [0]
>
> Socket 0
> --------
> Core 0 [0]
> Core 1 [1]
> Core 2 [2]
> Core 3 [3]
>
> We don't know much about this and would like your input on what else should be checked or what we need to do.
>
> 2) The part where you asked for a graph
> We used `ps` to analyse which CPU cores are being utilized.
> The raw socket version had two logical threads which used cores 0 and 1.
> The DPDK version had 6 logical threads, which also used cores 0 and 1. This is the case for which we showed you the results.
> As the previous case had 2 cores and was not giving the desired results, we tried giving more cores to see if the DPDK-in-ns-3 code could achieve the desired throughput and pps. (We thought giving more cores might improve the performance.)
> For this new case, we provided 4 total cores using EAL arguments, upon which it used cores 0-3. Still, we got the same results as the ones sent earlier.
> We think this means that, as of now, the bottleneck is a different problem unrelated to the number of cores. (This whole section is an answer to the question in the last paragraph raised by Kyle, to which Keith asked for a graph.)

In the CPU output above you are running a four-core system with no hyper-threads. This means you only have four cores and four threads in DPDK terms. Using 6 logical threads will not improve performance in the DPDK case. DPDK normally uses a single thread per core. You can have more than one pthread per core, but having more than one thread per core requires the software to switch threads, and that context switching is not a performance win in most cases.

I am not sure how your system is set up, and a picture could help.

I will be traveling all next week and responses will be slow.

>
> 3) About updating the TX_TIMEOUT and storing rte_get_timer_hz()
> We have not tried this yet; we will try it today and send you the status shortly.
>
> 4) For the suggestion by Stephen
> We are not clear on what you suggested and it would be nice if you could elaborate on it.
>
> Thanks and Regards,
> Harsh and Hrishikesh
>
> PS: We are done with our exams and will now be working on this regularly.
>
> On Sun, 25 Nov 2018 at 10:05, Stephen Hemminger wrote:
> On Sat, 24 Nov 2018 16:01:04 +0000
> "Wiles, Keith" wrote:
>
> > > On Nov 22, 2018, at 9:54 AM, Harsh Patel wrote:
> > >
> > > Hi
> > >
> > > Thank you so much for the reply and for the solution.
> > >
> > > We used the given code. We were amazed by the pointer arithmetic you used, and got to learn something new.
> > >
> > > But we are still underperforming. The same bottleneck of ~2.5 Mbps is seen.
> > >
> > > We also checked whether the raw socket was using any extra (logical) cores compared to the DPDK version. We found that the raw socket has 2 logical threads running on 2 logical CPUs, whereas the DPDK version has 6 logical threads on 2 logical CPUs. We also ran the 6 threads on 4 logical CPUs, and still we see the same bottleneck.
> > >
> > > We have updated our code (you can use the same links from the previous mail). It would be helpful if you could help us find what causes the bottleneck.
> >
> > I looked at the code for a few seconds and noticed your TX_TIMEOUT is a macro that calls (rte_get_timer_hz()/2014). Just to be safe I would not call rte_get_timer_hz() each time, but grab the value once, store the hz locally and use that variable instead. My guess is this will not improve performance by itself, and I would have to look at the code of that routine to see whether storing the value locally buys you anything. If getting the hz is just a simple read of a variable then fine, but you should still use a local variable within the object to hold the (rte_get_timer_hz()/2048) result instead of doing the call and the divide each time.
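For illustration only, caching the value once could look roughly like the sketch below; the member names m_timerHz, m_txTimeoutCycles and m_nextTsc are assumed here and are not taken from the original code:

    /* Hypothetical members of the device class (names assumed for illustration) */
    uint64_t m_timerHz;          /* cached result of rte_get_timer_hz() */
    uint64_t m_txTimeoutCycles;  /* cycle count for the flush timeout, computed once */
    uint64_t m_nextTsc;          /* next TSC value at which to flush */

    /* Done once, e.g. during device initialization */
    m_timerHz         = rte_get_timer_hz();   /* read the timer frequency a single time */
    m_txTimeoutCycles = m_timerHz / 2048;     /* do the divide once instead of per packet */
    m_nextTsc         = rte_rdtsc() + m_txTimeoutCycles;

The hot path then only compares rte_rdtsc() against m_nextTsc and never calls rte_get_timer_hz() or divides again.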
> >
> > >
> > > Thanks and Regards,
> > > Harsh and Hrishikesh
> > >
> > >
> > > On Mon, Nov 19, 2018, 19:19 Wiles, Keith wrote:
> > >
> > >
> > > > On Nov 17, 2018, at 4:05 PM, Kyle Larose wrote:
> > > >
> > > > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel wrote:
> > > >>
> > > >> Hello,
> > > >> Thanks a lot for going through the code and providing us with so much
> > > >> information.
> > > >> We removed all the memcpy/malloc from the data path as you suggested and
> > > > ...
> > > >> After removing this, we are able to see a performance gain but not as good
> > > >> as raw socket.
> > > >>
> > > >
> > > > You're using an unordered_map to map your buffer pointers back to the
> > > > mbufs. While it may not do a memcpy all the time, it will likely end
> > > > up doing a malloc arbitrarily when you insert or remove entries from
> > > > the map. If it needs to resize the table, it'll be even worse. You may
> > > > want to consider using librte_hash:
> > > > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > > > see if you can design the system to avoid needing to do a lookup like
> > > > this. Can you return a handle with the mbuf pointer and the data
> > > > together?
> > > >
> > > > You're also using floating point math where it's unnecessary (the
> > > > timing check). Just multiply the numerator by 1000000 prior to doing
> > > > the division. I doubt you'll overflow a uint64_t with that. It's not
> > > > as efficient as integer math, though I'm not sure offhand it'd cause a
> > > > major perf problem.
> > > >
> > > > One final thing: using a raw socket, the kernel will take over
> > > > transmitting and receiving to the NIC itself. That means it is free to
> > > > use multiple CPUs for the rx and tx. I notice that you only have one
> > > > rx/tx queue, meaning at most one CPU can send and receive packets.
> > > > When running your performance test with the raw socket, you may want
> > > > to see how busy the system is doing packet sends and receives. Is it
> > > > using more than one CPU's worth of processing? Is it using less, but
> > > > when combined with your main application's usage, the overall system
> > > > is still using more than one?
> > >
> > > As for the floating point math, I would remove it entirely and use the rte_rdtsc() function to work in cycles.
> > > Using something like:
> > >
> > >     uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);  /* One 16th of a second; use 2/4/8/16/32 (powers of two) to keep the divide simple */
> > >
> > >     cur_tsc = rte_rdtsc();
> > >
> > >     next_tsc = cur_tsc + timo;  /* next_tsc is the next time to flush */
> > >
> > >     while(1) {
> > >         cur_tsc = rte_rdtsc();
> > >         if (cur_tsc >= next_tsc) {
> > >             flush();
> > >             next_tsc += timo;
> > >         }
> > >         /* Do other stuff */
> > >     }
> > >
> > > For the m_bufPktMap I would use rte_hash, or do not use a hash at all: grab the buffer address and subtract back to the mbuf:
> > >
> > >     mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > >
> > >
> > >     DpdkNetDevice::Write(uint8_t *buffer, size_t length)
> > >     {
> > >         struct rte_mbuf *pkt;
> > >         uint64_t cur_tsc;
> > >
> > >         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > >
> > >         /* No need to test pkt, but buffer may be tested for NULL before the math above */
> > >
> > >         pkt->pkt_len  = length;
> > >         pkt->data_len = length;
> > >
> > >         rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);
> > >
> > >         cur_tsc = rte_rdtsc();
> > >
> > >         /* next_tsc is a private variable */
> > >         if (cur_tsc >= next_tsc) {
> > >             rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer);  /* hardcoded the queue id, should be fixed */
> > >             next_tsc = cur_tsc + timo;  /* timo is a fixed number of cycles to wait */
> > >         }
> > >         return length;
> > >     }
> > >
> > >     DpdkNetDevice::Read()
> > >     {
> > >         struct rte_mbuf *pkt;
> > >
> > >         if (m_rxBuffer->length == 0) {
> > >             m_rxBuffer->next = 0;
> > >             m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);
> > >
> > >             if (m_rxBuffer->length == 0)
> > >                 return std::make_pair(NULL, -1);
> > >         }
> > >
> > >         pkt = m_rxBuffer->pkts[m_rxBuffer->next++];
> > >
> > >         /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */
> > >
> > >         return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
> > >     }
> > >
> > >     void
> > >     DpdkNetDevice::FreeBuf(uint8_t *buf)
> > >     {
> > >         struct rte_mbuf *pkt;
> > >
> > >         if (!buf)
> > >             return;
> > >         pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > >
> > >         rte_pktmbuf_free(pkt);
> > >     }
> > >
> > > When your code is done with the buffer, convert the buffer address back to an rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the copy and the floating point code. Converting my C code to C++: priceless :-)
> > >
> > > Hopefully the buffer address passed is the original buffer address and has not been adjusted.
> > >
> > >
> > > Regards,
> > > Keith
> > >
> >
> > Regards,
> > Keith
> >
>
> Also, rdtsc causes the CPU to stop doing any look-ahead, so there is a Heisenberg effect.
> Adding more rdtsc calls will hurt performance. It also looks like your code is not doing bursting correctly.
> What if multiple packets arrive in one rx_burst?

Regards,
Keith
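On the bursting question just above, a minimal sketch of a receive path that drains the whole burst could look like the following; MAX_PKT_BURST, m_portId and HandlePacket are assumed names for illustration and are not taken from the original code:

    struct rte_mbuf *pkts[MAX_PKT_BURST];
    uint16_t nb_rx, i;

    /* Pull up to MAX_PKT_BURST packets from queue 0 in one call */
    nb_rx = rte_eth_rx_burst(m_portId, 0, pkts, MAX_PKT_BURST);

    /* Process every packet returned by the burst, not just the first one */
    for (i = 0; i < nb_rx; i++) {
        HandlePacket(rte_pktmbuf_mtod(pkts[i], char *), pkts[i]->pkt_len);
        rte_pktmbuf_free(pkts[i]);  /* release the mbuf once the payload has been consumed */
    }

This avoids polling the PMD again while packets from the previous burst are still unprocessed.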