From mboxrd@z Thu Jan 1 00:00:00 1970
From: Prashant Upadhyaya
To: "Wang, Shawn", Alexander Belyakov
Cc: "dev@dpdk.org"
Date: Sat, 2 Nov 2013 11:02:31 +0530
Subject: Re: [dpdk-dev] Surprisingly high TCP ACK packets drop counter

Hi,

I have used DPDK 1.4 and DPDK 1.5, and the packets do fan out nicely on the rx queues in some use cases I have.

Alexander, can you please try DPDK 1.4 or 1.5 and share the results?

Regards
-Prashant

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Shawn
Sent: Friday, November 01, 2013 8:24 PM
To: Alexander Belyakov
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Surprisingly high TCP ACK packets drop counter

Hi:

We had the same problem before. It turned out that RSC (receive side coalescing) is enabled by default in DPDK, so we wrote this naïve patch to disable it. The patch is based on DPDK 1.3; not sure whether 1.5 has changed this or not.

After this patch, the ACK rate should go back to 14.5 Mpps. For details, refer to the Intel® 82599 10 GbE Controller Datasheet, section 7.11 "Receive Side Coalescing".
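In essence the change is a read-modify-write of the 82599 RFCTL register during RX init. As a rough standalone sketch (assuming the ixgbe base driver's struct ixgbe_hw handle and its IXGBE_READ_REG()/IXGBE_WRITE_REG() macros are in scope; the helper name is only illustrative, the real change lives inside ixgbe_dev_rx_init() as in the patch below) it boils down to:

/* Sketch only: mirrors the hunk added to ixgbe_dev_rx_init() below. */
static void
ixgbe_rsc_disable_sketch(struct ixgbe_hw *hw)
{
	uint32_t rfctl;

	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);  /* read current filter control */
	rfctl |= IXGBE_RFCTL_RSC_DIS;             /* set the RSC disable bit */
	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);  /* write it back */
}

Note that the first hunk of the patch also fixes the IXGBE_RFCTL_RSC_DIS mask itself (0x00000010 -> 0x00000020), so that the bit set here is actually the register's RSC disable bit.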
From: xingbow
Date: Wed, 21 Aug 2013 11:35:23 -0700
Subject: [PATCH] Disable RSC in ixgbe_dev_rx_init function in file ixgbe_rxtx.c

---
 DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h | 2 +-
 DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c       | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
index 7fffd60..f03046f 100644
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
@@ -1930,7 +1930,7 @@ enum {
 #define IXGBE_RFCTL_ISCSI_DIS		0x00000001
 #define IXGBE_RFCTL_ISCSI_DWC_MASK	0x0000003E
 #define IXGBE_RFCTL_ISCSI_DWC_SHIFT	1
-#define IXGBE_RFCTL_RSC_DIS		0x00000010
+#define IXGBE_RFCTL_RSC_DIS		0x00000020
 #define IXGBE_RFCTL_NFSW_DIS		0x00000040
 #define IXGBE_RFCTL_NFSR_DIS		0x00000080
 #define IXGBE_RFCTL_NFS_VER_MASK	0x00000300
diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 07830b7..ba6e05d 100755
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3007,6 +3007,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	uint64_t bus_addr;
 	uint32_t rxctrl;
 	uint32_t fctrl;
+	uint32_t rfctl;
 	uint32_t hlreg0;
 	uint32_t maxfrs;
 	uint32_t srrctl;
@@ -3033,6 +3034,12 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	fctrl |= IXGBE_FCTRL_PMCF;
 	IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);
 
+	/* Disable RSC */
+	RTE_LOG(INFO, PMD, "Disable RSC\n");
+	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
+	rfctl |= IXGBE_RFCTL_RSC_DIS;
+	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
+
 	/*
 	 * Configure CRC stripping, if any.
 	 */
--

Thanks.
Wang, Xingbo

On 11/1/13 6:43 AM, "Alexander Belyakov" wrote:

>Hello,
>
>we have a simple test application on top of DPDK whose sole purpose is to
>forward as many packets as possible. Generally we easily achieve
>14.5 Mpps with two 82599EB NICs (one as input and one as output). The only
>surprising exception is forwarding a pure TCP ACK flood, where performance
>always drops to approximately 7 Mpps.
>
>For simplicity, consider two different types of traffic:
>1) a TCP SYN flood is forwarded at a 14.5 Mpps rate,
>2) a pure TCP ACK flood is forwarded at only a 7 Mpps rate.
>
>Both SYN and ACK packets have exactly the same length.
>
>It is worth mentioning that this forwarding application looks at the Ethernet
>and IP headers, but never deals with the L4 headers.
>
>We tracked the issue down to the RX circuit. To be specific, there are 4 RX
>queues initialized on the input port, and rte_eth_stats_get() shows uniform
>packet distribution (q_ipackets) among them, while q_errors remain zero
>for all queues. The only drop counter quickly increasing in the case of a
>pure ACK flood is ierrors, while rx_nombuf remains zero.
>
>We tried different kinds of traffic generators, but always got the same
>result: 7 Mpps (instead of the expected 14 Mpps) for TCP packets with the ACK
>flag bit set and all other flag bits cleared. Source IPs and ports
>are selected randomly.
>
>Please let us know if anyone is aware of such strange behavior and
>where we should look to narrow down the problem.
>
>Thanks in advance,
>Alexander Belyakov
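For reference, the counter check Alexander describes can be reproduced with rte_eth_stats_get(). A minimal sketch follows; the field names match struct rte_eth_stats, the helper name is only illustrative, the 4-queue count matches the setup above, and the port_id type and return value of rte_eth_stats_get() differ slightly across DPDK releases.

#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Dump the per-queue vs. port-level RX drop counters for one port.
 * A rising ierrors while q_errors[] and rx_nombuf stay at zero points
 * at drops in the NIC itself rather than mbuf exhaustion or one bad queue. */
static void
dump_rx_drop_counters(uint8_t port_id, unsigned int nb_rx_queues)
{
	struct rte_eth_stats stats;
	unsigned int q;

	rte_eth_stats_get(port_id, &stats);

	for (q = 0; q < nb_rx_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("rxq %u: q_ipackets=%" PRIu64 " q_errors=%" PRIu64 "\n",
		       q, stats.q_ipackets[q], stats.q_errors[q]);

	printf("port %u: ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       port_id, stats.ierrors, stats.rx_nombuf);
}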