From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 24 Nov 2018 20:35:41 -0800
From: Stephen Hemminger
To: "Wiles, Keith"
Cc: Harsh Patel, Kyle Larose, "users@dpdk.org"
Message-ID: <20181124203541.4aa9bbf2@xeon-e3>
In-Reply-To: <34E92C48-A90C-472C-A915-AAA4A6B5CDE8@intel.com>
References: <71CBA720-633D-4CFE-805C-606DAAEDD356@intel.com>
 <3C60E59D-36AD-4382-8CC3-89D4EEB0140D@intel.com>
 <76959924-D9DB-4C58-BB05-E33107AD98AC@intel.com>
 <485F0372-7486-473B-ACDA-F42A2D86EF03@intel.com>
 <34E92C48-A90C-472C-A915-AAA4A6B5CDE8@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Subject: Re: [dpdk-users] Query on handling packets
List-Id: DPDK usage discussions

On Sat, 24 Nov 2018 16:01:04 +0000
"Wiles, Keith" wrote:

> > On Nov 22, 2018, at 9:54 AM, Harsh Patel wrote:
> > 
> > Hi
> > 
> > Thank you so much for the reply and for the solution.
> > 
> > We used the given code. We were amazed by the pointer arithmetic you used and got to learn something new.
> > 
> > But we are still underperforming. The same bottleneck of ~2.5 Mbps is seen.
> > 
> > We also checked whether the raw socket version was using any extra (logical) cores compared to the DPDK version. We found that the raw socket version has 2 logical threads running on 2 logical CPUs,
> > whereas the DPDK version has 6 logical threads on 2 logical CPUs. We also ran the 6 threads on 4 logical CPUs, but we still see the same bottleneck.
> > 
> > We have updated our code (you can use the same links from the previous mail). It would be helpful if you could help us find what is causing the bottleneck.
> 
> I looked at the code for a few seconds and noticed your TX_TIMEOUT is a macro that calls (rte_get_timer_hz()/2048). Just to be safe I would not call rte_get_timer_hz() each time, but grab the value once, store the hz locally and use that variable instead. My guess is this will not improve performance, and I would have to look at the code of that routine to see if storing the value locally buys you anything. If getting the hz is just a simple read of a variable then fine, but you should still use a local variable within the object to hold the (rte_get_timer_hz()/2048) result instead of doing the call and the divide each time.
> 
> > 
> > Thanks and Regards,
> > Harsh and Hrishikesh
> > 
> > 
> > On Mon, Nov 19, 2018, 19:19 Wiles, Keith wrote:
> > 
> > 
> > > On Nov 17, 2018, at 4:05 PM, Kyle Larose wrote:
> > > 
> > > On Sat, Nov 17, 2018 at 5:22 AM Harsh Patel wrote:
> > >> 
> > >> Hello,
> > >> Thanks a lot for going through the code and providing us with so much
> > >> information.
> > >> We removed all the memcpy/malloc from the data path as you suggested and
> > > ...
> > >> After removing this, we are able to see a performance gain but not as good
> > >> as raw socket.
> > >> 
> > > 
> > > You're using an unordered_map to map your buffer pointers back to the
> > > mbufs. While it may not do a memcpy all the time, it will likely end
> > > up doing a malloc arbitrarily when you insert or remove entries from
> > > the map. If it needs to resize the table, it'll be even worse. You may
> > > want to consider using librte_hash:
> > > https://doc.dpdk.org/api/rte__hash_8h.html instead. Or, even better,
> > > see if you can design the system to avoid needing to do a lookup like
> > > this. Can you return a handle with the mbuf pointer and the data
> > > together?
> > > 
> > > You're also using floating point math where it's unnecessary (the
> > > timing check). Just multiply the numerator by 1000000 prior to doing
> > > the division. I doubt you'll overflow a uint64_t with that. It's not
> > > as efficient as integer math, though I'm not sure offhand it'd cause a
> > > major perf problem.
> > > 
> > > One final thing: using a raw socket, the kernel will take over
> > > transmitting and receiving to the NIC itself. That means it is free to
> > > use multiple CPUs for the rx and tx. I notice that you only have one
> > > rx/tx queue, meaning at most one CPU can send and receive packets.
> > > When running your performance test with the raw socket, you may want
> > > to see how busy the system is doing packet sends and receives. Is it
> > > using more than one CPU's worth of processing? Is it using less, but,
> > > when combined with your main application's usage, the overall system
> > > is still using more than one?
> > 
> > Along with the floating point math, I would remove all floating point math and use the rte_rdtsc() function to work in cycles.
> > Using something like:
> > 
> > uint64_t cur_tsc, next_tsc, timo = (rte_get_timer_hz() / 16);  /* One 16th of a second; use 2/4/8/16/32 power-of-two numbers to make the math a simple divide */
> > 
> > cur_tsc = rte_rdtsc();
> > 
> > next_tsc = cur_tsc + timo;  /* Now next_tsc is the next time to flush */
> > 
> > while(1) {
> >     cur_tsc = rte_rdtsc();
> >     if (cur_tsc >= next_tsc) {
> >         flush();
> >         next_tsc += timo;
> >     }
> >     /* Do other stuff */
> > }
> > 
> > For the m_bufPktMap I would use the rte_hash, or do not use a hash at all by grabbing the buffer address and subtracting the mbuf header plus headroom:
> > 
> > mbuf = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > 
> > 
> > DpdkNetDevice::Write(uint8_t *buffer, size_t length)
> > {
> >     struct rte_mbuf *pkt;
> >     uint64_t cur_tsc;
> > 
> >     pkt = (struct rte_mbuf *)RTE_PTR_SUB(buffer, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > 
> >     /* No need to test pkt, but buffer may be tested to make sure it is not NULL before the math above */
> > 
> >     pkt->pkt_len = length;
> >     pkt->data_len = length;
> > 
> >     rte_eth_tx_buffer(m_portId, 0, m_txBuffer, pkt);
> > 
> >     cur_tsc = rte_rdtsc();
> > 
> >     /* next_tsc is a private variable */
> >     if (cur_tsc >= next_tsc) {
> >         rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer);  /* hard-coded queue id, should be fixed */
> >         next_tsc = cur_tsc + timo;  /* timo is a fixed number of cycles to wait */
> >     }
> >     return length;
> > }
> > 
> > DpdkNetDevice::Read()
> > {
> >     struct rte_mbuf *pkt;
> > 
> >     if (m_rxBuffer->length == 0) {
> >         m_rxBuffer->next = 0;
> >         m_rxBuffer->length = rte_eth_rx_burst(m_portId, 0, m_rxBuffer->pkts, MAX_PKT_BURST);
> > 
> >         if (m_rxBuffer->length == 0)
> >             return std::make_pair(NULL, -1);
> >     }
> > 
> >     pkt = m_rxBuffer->pkts[m_rxBuffer->next++];
> > 
> >     /* do not use rte_pktmbuf_read() as it does a copy of the complete packet */
> > 
> >     return std::make_pair(rte_pktmbuf_mtod(pkt, char *), pkt->pkt_len);
> > }
> > 
> > void
> > DpdkNetDevice::FreeBuf(uint8_t *buf)
> > {
> >     struct rte_mbuf *pkt;
> > 
> >     if (!buf)
> >         return;
> >     pkt = (struct rte_mbuf *)RTE_PTR_SUB(buf, sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
> > 
> >     rte_pktmbuf_free(pkt);
> > }
> > 
> > When your code is done with the buffer, convert the buffer address back to an rte_mbuf pointer and call rte_pktmbuf_free(pkt). This should eliminate the copy and the floating point code. Converting my C code to C++: priceless :-)
> > 
> > Hopefully the buffer address passed is the original buffer address and has not been adjusted.
> > 
> > Regards,
> > Keith
> 
> Regards,
> Keith
> 

Also, rdtsc causes the CPU to stop doing any look-ahead, so there is a Heisenberg effect: adding more rdtsc calls will hurt performance. It also looks like your code is not doing bursting correctly. What if multiple packets arrive in one rx_burst?
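One way to keep the timed flush without paying the rdtsc cost on every send is
to amortize it. A rough sketch only (m_writeCount, m_nextTsc and m_timo are
hypothetical member variables, not names from the posted code; needs
rte_cycles.h and rte_ethdev.h):

    /* In Write(), after rte_eth_tx_buffer(): read the TSC only on every
     * 32nd packet instead of on every call */
    if ((++m_writeCount & 31) == 0) {
        uint64_t cur_tsc = rte_rdtsc();

        if (cur_tsc >= m_nextTsc) {
            rte_eth_tx_buffer_flush(m_portId, 0, m_txBuffer);
            m_nextTsc = cur_tsc + m_timo;
        }
    }

rte_eth_tx_buffer() already flushes on its own once the buffer reaches its
configured size, so the TSC check only matters for draining a partially
filled buffer under light load.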
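For the bursting, the read side should hand back every packet returned by a
single rte_eth_rx_burst() before asking the hardware again. A minimal sketch
(HandlePacket() is a stand-in for whatever consumes the data; it is not a
function from the posted code):

    struct rte_mbuf *pkts[MAX_PKT_BURST];
    uint16_t nb_rx;

    /* Pull as many packets as the PMD has ready, up to MAX_PKT_BURST */
    nb_rx = rte_eth_rx_burst(m_portId, 0, pkts, MAX_PKT_BURST);

    for (uint16_t i = 0; i < nb_rx; i++) {
        struct rte_mbuf *pkt = pkts[i];

        /* Zero-copy hand-off of the payload */
        HandlePacket(rte_pktmbuf_mtod(pkt, uint8_t *), pkt->pkt_len);

        /* Assumes HandlePacket() consumed the data synchronously;
         * if not, defer this free to FreeBuf() as above */
        rte_pktmbuf_free(pkt);
    }

The m_rxBuffer->next/length bookkeeping above does something similar, but it
only pays off if the caller keeps calling Read() until the burst is empty.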