From: "Hanoch Haim (hhaim)"
To: "Mcnamara, John", "dev@dpdk.org"
Cc: "Ido Barnea (ibarnea)", "Hanoch Haim (hhaim)"
Date: Tue, 14 Feb 2017 12:31:56 +0000
Message-ID: <6ee7449acb434fafb80c5cb1b970be15@XCH-RTP-017.cisco.com>
Subject: Re: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%

Hi John, thank you for the fast response.
I assume the Intel tests are mostly rx->tx tests.
In our case we are doing mostly tx, which is closer to what dpdk-pkt-gen does.
The cases in which we cached the mbufs were affected the most.
We expect to see the same issue with a simple DPDK application.

Thanks,
Hanoh
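For context, a minimal sketch of the cached-mbuf TX pattern referred to above: the application keeps a burst of pre-built mbufs and re-transmits them, bumping the reference count so the PMD's free on TX completion does not return them to the pool. This is illustrative only, not TRex code; the function name, burst size, and port/queue ids are assumptions.

    /* Hypothetical sketch (not TRex code): re-transmit cached mbufs. */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static void tx_cached_burst(uint8_t port, uint16_t queue,
                                struct rte_mbuf *cached[BURST])
    {
        uint16_t i, sent;

        /* Take an extra reference on each cached mbuf so the PMD's
         * free on TX completion drops the reference instead of
         * returning the mbuf to the mempool. */
        for (i = 0; i < BURST; i++)
            rte_mbuf_refcnt_update(cached[i], 1);

        sent = rte_eth_tx_burst(port, queue, cached, BURST);

        /* Give back the extra reference on packets the PMD rejected. */
        for (i = sent; i < BURST; i++)
            rte_mbuf_refcnt_update(cached[i], -1);
    }
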
-----Original Message-----
From: Mcnamara, John [mailto:john.mcnamara@intel.com]
Sent: Tuesday, February 14, 2017 2:19 PM
To: Hanoch Haim (hhaim); dev@dpdk.org
Cc: Ido Barnea (ibarnea)
Subject: RE: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hanoch Haim
> (hhaim)
> Sent: Tuesday, February 14, 2017 11:45 AM
> To: dev@dpdk.org
> Cc: Ido Barnea (ibarnea); Hanoch Haim (hhaim)
> Subject: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%
>
> Hi,
>
> We (the TRex traffic generator project) upgraded DPDK from 16.07 to
> 17.02 RC-3 and saw a performance degradation on the following NICs:
>
> XL710 : 10-15%
> ixgbe : 8% in one case
> mlx5  : 8% in two cases
> X710  : no impact (same driver as XL710)
> VIC   : no impact
>
> It might be related to a DPDK infrastructure change, as it affects
> more than one driver (maybe mbuf?).
> We wanted to know whether this is expected before investing more time
> into it.
> The Y-axis numbers in all the following charts (from Grafana) are
> MPPS/core, which reflects how many cycles the CPU invests per packet:
> a higher MPPS/core means fewer CPU cycles per transmitted packet.

Hi,

Thanks for the update. From a quick check with the Intel test team, they
haven't seen this; they would have flagged it if they had. Maybe someone
from Mellanox/Netronome could confirm as well.

Could you do a git bisect to identify the change that caused this?

John
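For reference, a minimal bisect session between the two releases could look like the following; the tag names assume DPDK's usual release tagging (v16.07, v17.02-rc3), and the benchmark step is whatever run reproduces the regression:

    git bisect start
    git bisect bad v17.02-rc3    # first release showing the regression
    git bisect good v16.07       # last known-good release
    # rebuild DPDK and re-run the benchmark at each step, then mark it
    # with "git bisect good" or "git bisect bad" until git reports the
    # first bad commit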