From: "Wiles, Keith"
To: Harsh Patel
CC: Kyle Larose, "users@dpdk.org"
Date: Sat, 24 Nov 2018 15:43:15 +0000
Subject: Re: [dpdk-users] Query on handling packets

> On Nov 22, 2018, at 9:54 AM, Harsh Patel wrote:
>
> Hi
>
> Thank you so much for the reply and for the solution.
>
> We used the given code. We were amazed by the pointer arithmetic you used,
> and got to learn something new.
>
> But we are still underperforming. The same bottleneck of ~2.5 Mbps is seen.

Make sure the cores you are using are on the same NUMA node (socket) where the
PCI devices are located, if you have two CPUs or sockets in your system. The
cpu_layout.py script will help you understand the layout of the cores and/or
lcores in the system.

On my machine the PCI bus is connected to socket 1 and not socket 0, which
means I have to use lcores only on socket 1. Some systems have two PCI buses,
one on each socket. Accessing data from one NUMA zone or socket to another can
affect performance and should be avoided.

HTH
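For illustration, a minimal sketch of that check using standard DPDK calls. The
helper name is made up, and it assumes a single port id as in the code later in
this thread:

        #include <stdio.h>
        #include <rte_ethdev.h>
        #include <rte_lcore.h>

        /* Sketch: warn when the polling lcore and the NIC sit on different NUMA
         * sockets. CheckPortSocket() is an illustrative name, not part of the
         * original code. */
        static void
        CheckPortSocket(uint16_t portId)
        {
                int port_socket = rte_eth_dev_socket_id(portId);                /* NUMA node of the NIC, -1 if unknown */
                unsigned lcore_socket = rte_lcore_to_socket_id(rte_lcore_id()); /* NUMA node of this lcore */

                if (port_socket >= 0 && (unsigned)port_socket != lcore_socket)
                        printf("WARNING: port %u is on socket %d, but lcore %u runs on socket %u\n",
                               portId, port_socket, rte_lcore_id(), lcore_socket);
        }

The mbuf pool should also live on the port's socket, e.g. by passing
rte_eth_dev_socket_id(portId) as the socket_id argument of
rte_pktmbuf_pool_create().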
> We also checked whether the raw socket version was using any extra (logical)
> cores compared to the DPDK version. We found that the raw socket version has
> 2 logical threads running on 2 logical CPUs, whereas the DPDK version has 6
> logical threads on 2 logical CPUs. We also ran the 6 threads on 4 logical
> CPUs, and still we see the same bottleneck.
>
> We have updated our code (you can use the same links from the previous mail).
> It would be helpful if you could help us find what causes the bottleneck.
>
> Thanks and Regards,
> Harsh and Hrishikesh
>
>
> On Mon, Nov 19, 2018, 19:19 Wiles, Keith wrote:
>
>
> > On Nov 17, 2018, at 4:05 PM, Kyle Larose wrote:
> >
> > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel wrote:
> >>
> >> Hello,
> >> Thanks a lot for going through the code and providing us with so much
> >> information.
> >> We removed all the memcpy/malloc from the data path as you suggested and
> > ...
> >> After removing this, we are able to see a performance gain but not as good
> >> as raw socket.
> >>
> >
> > You're using an unordered_map to map your buffer pointers back to the
> > mbufs. While it may not do a memcpy all the time, it will likely end
> > up doing a malloc arbitrarily when you insert or remove entries from
> > the map. If it needs to resize the table, it'll be even worse. You may
> > want to consider using librte_hash:
> > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > see if you can design the system to avoid needing to do a lookup like
> > this. Can you return a handle with the mbuf pointer and the data
> > together?
> >
> > You're also using floating point math where it's unnecessary (the
> > timing check). Just multiply the numerator by 1000000 prior to doing
> > the division. I doubt you'll overflow a uint64_t with that. It's not
> > as efficient as integer math, though I'm not sure offhand it'd cause a
> > major perf problem.
> >
> > One final thing: using a raw socket, the kernel will take over
> > transmitting to and receiving from the NIC itself. That means it is
> > free to use multiple CPUs for the rx and tx. I notice that you only
> > have one rx/tx queue, meaning at most one CPU can send and receive
> > packets. When running your performance test with the raw socket, you
> > may want to see how busy the system is doing packet sends and
> > receives. Is it using more than one CPU's worth of processing? Is it
> > using less, but when combined with your main application's usage, the
> > overall system is still using more than one?
>
> Along with Kyle's points, I would remove all floating point math and use the
> rte_rdtsc() function to work in cycles.
> Using something like:
>
>     uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16); /* One 16th of a second; use 2/4/8/16/32 (powers of two) to keep the math a simple divide */
>
>     cur_tsc = rte_rdtsc();
>
>     next_tsc = cur_tsc + timo; /* next_tsc is the next time to flush */
>
>     while(1) {
>         cur_tsc = rte_rdtsc();
>         if (cur_tsc >= next_tsc) {
>             flush();
>             next_tsc += timo;
>         }
>         /* Do other stuff */
>     }
>
> For the m_bufPktMap I would use rte_hash, or do not use a hash at all: grab
> the buffer address and subtract the mbuf header plus headroom to get back to
> the mbuf:
>
>     mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
>
>
> DpdkNetDevice::Write(uint8_t *buffer, size_t length)
> {
>     struct rte_mbuf *pkt;
>     uint64_t cur_tsc;
>
>     pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
>
>     /* No need to test pkt, but buffer may be tested for NULL before the math above */
>
>     pkt->pkt_len = length;
>     pkt->data_len = length;
>
>     rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);
>
>     cur_tsc = rte_rdtsc();
>
>     /* next_tsc is a private member variable */
>     if (cur_tsc >= next_tsc) {
>         rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer); /* hardcoded the queue id, should be fixed */
>         next_tsc = cur_tsc + timo; /* timo is a fixed number of cycles to wait */
>     }
>     return length;
> }
>
> DpdkNetDevice::Read()
> {
>     struct rte_mbuf *pkt;
>
>     if (m_rxBuffer->length == 0) {
>         m_rxBuffer->next = 0;
>         m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);
>
>         if (m_rxBuffer->length == 0)
>             return std::make_pair(NULL, -1);
>     }
>
>     pkt = m_rxBuffer->pkts[m_rxBuffer->next++];
>
>     /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */
>
>     return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
> }
>
> void
> DpdkNetDevice::FreeBuf(uint8_t *buf)
> {
>     struct rte_mbuf *pkt;
>
>     if (!buf)
>         return;
>     pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
>
>     rte_pktmbuf_free(pkt);
> }
>
> When your code is done with the buffer, convert the buffer address back to an
> rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the
> copy and the floating point code. Converting my C code to C++: priceless :-)
>
> Hopefully the buffer address passed is the original buffer address and has
> not been adjusted.
>
>
> Regards,
> Keith
>

Regards,
Keith
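As a footnote to Kyle's librte_hash suggestion above: if a lookup table is
still preferred over the pointer arithmetic Keith shows, a minimal sketch of
the rte_hash usage could look like this (the table name, entry count, and hash
function are arbitrary illustrative choices, and error handling is omitted):

        #include <rte_hash.h>
        #include <rte_jhash.h>
        #include <rte_lcore.h>

        /* Sketch: an rte_hash keyed on the packet data pointer, mapping back to the mbuf. */
        static struct rte_hash *
        CreateBufMap(void)
        {
                struct rte_hash_parameters params = {};

                params.name      = "buf2mbuf";
                params.entries   = 8192;              /* must cover the max number of in-flight buffers */
                params.key_len   = sizeof(uint8_t *); /* the key is the buffer address itself */
                params.hash_func = rte_jhash;
                params.socket_id = rte_socket_id();

                return rte_hash_create(&params);
        }

        /* Usage on the hot path (h is the table, buf the data pointer, pkt the mbuf):
         *   rte_hash_add_key_data(h, &buf, pkt);            store buf -> mbuf
         *   rte_hash_lookup_data(h, &buf, (void **)&pkt);   find the mbuf for buf
         *   rte_hash_del_key(h, &buf);                      drop the entry once the mbuf is freed
         * Unlike std::unordered_map, the table does not allocate after creation. */

That said, the RTE_PTR_SUB() conversion above removes the lookup entirely and
is the cheaper option when it applies.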
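On Kyle's point about a single rx/tx queue: spreading the load over several
lcores requires configuring more queue pairs up front. A rough sketch, assuming
RSS-capable hardware (the queue count, descriptor counts, and helper name are
illustrative):

        #include <rte_ethdev.h>

        #define NB_QUEUES 4 /* arbitrary; one rx/tx pair per polling lcore */

        /* Sketch: configure NB_QUEUES rx/tx queue pairs with RSS so several
         * lcores can poll the same port. */
        static int
        SetupPortMultiQueue(uint16_t portId, struct rte_mempool *pool)
        {
                struct rte_eth_conf conf = {};
                uint16_t q;

                conf.rxmode.mq_mode = ETH_MQ_RX_RSS; /* spread flows across the rx queues */
                conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP;

                if (rte_eth_dev_configure(portId, NB_QUEUES, NB_QUEUES, &conf) < 0)
                        return -1;

                for (q = 0; q < NB_QUEUES; q++) {
                        if (rte_eth_rx_queue_setup(portId, q, 512,
                                        rte_eth_dev_socket_id(portId), NULL, pool) < 0)
                                return -1;
                        if (rte_eth_tx_queue_setup(portId, q, 512,
                                        rte_eth_dev_socket_id(portId), NULL) < 0)
                                return -1;
                }
                return rte_eth_dev_start(portId);
        }

Each polling lcore would then call rte_eth_rx_burst(portId, q, ...) and
rte_eth_tx_buffer(portId, q, ...) with its own queue index q.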