From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 60B872BF3 for ; Wed, 30 Aug 2017 15:36:02 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 30 Aug 2017 06:32:26 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.41,448,1498546800"; d="scan'208";a="1212503395"
Received: from irsmsx154.ger.corp.intel.com ([163.33.192.96]) by fmsmga002.fm.intel.com with ESMTP; 30 Aug 2017 06:32:25 -0700
Received: from irsmsx111.ger.corp.intel.com (10.108.20.4) by IRSMSX154.ger.corp.intel.com (163.33.192.96) with Microsoft SMTP Server (TLS) id 14.3.319.2; Wed, 30 Aug 2017 14:32:24 +0100
Received: from irsmsx101.ger.corp.intel.com ([169.254.1.22]) by irsmsx111.ger.corp.intel.com ([169.254.2.30]) with mapi id 14.03.0319.002; Wed, 30 Aug 2017 14:32:24 +0100
From: "Kavanagh, Mark B"
To: "Ananyev, Konstantin" , "Hu, Jiayu"
CC: "dev@dpdk.org" , "Tan, Jianfeng"
Thread-Topic: [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
Thread-Index: AQHTIWJ2/2nqlXkfmkepRawJfqr+haKcp7aAgAA9IsA=
Date: Wed, 30 Aug 2017 13:32:23 +0000
Message-ID:
References: <1503584144-63181-1-git-send-email-jiayu.hu@intel.com> <2601191342CEEE43887BDE71AB9772584F23E07B@IRSMSX103.ger.corp.intel.com> <20170830073656.GA79301@dpdk15.sh.intel.com> <2601191342CEEE43887BDE71AB9772584F23E240@IRSMSX103.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB9772584F23E240@IRSMSX103.ger.corp.intel.com>
Accept-Language: en-IE, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ctpclassification: CTP_IC
dlp-product: dlpe-windows
dlp-version: 11.0.0.116
dlp-reaction: no-action
x-originating-ip: [163.33.239.181]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 30 Aug 2017 13:36:03 -0000

>From: Ananyev, Konstantin
>Sent: Wednesday, August 30, 2017 11:49 AM
>To: Hu, Jiayu
>Cc: dev@dpdk.org; Kavanagh, Mark B ; Tan, Jianfeng
>
>Subject: RE: [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
>
>
>
>> -----Original Message-----
>> From: Hu, Jiayu
>> Sent: Wednesday, August 30, 2017 8:37 AM
>> To: Ananyev, Konstantin
>> Cc: dev@dpdk.org; Kavanagh, Mark B ; Tan, Jianfeng
>> Subject: Re: [PATCH 0/5] Support TCP/IPv4, VxLAN and GRE GSO in DPDK
>>
>> Hi Konstantin,
>>
>> Thanks for your suggestions. Feedback is inline.
>>
>> Thanks,
>> Jiayu
>>
>> On Wed, Aug 30, 2017 at 09:37:42AM +0800, Ananyev, Konstantin wrote:
>> >
>> > Hi Jiayu,
>> > A few questions/comments from me below and in the next few mails.
>> > Thanks
>> > Konstantin
>> >
>> > >
>> > > Generic Segmentation Offload (GSO) is a SW technique to split large
>> > > packets into small ones. Akin to TSO, GSO enables applications to
>> > > operate on large packets, thus reducing per-packet processing overhead.
>> > >
>> > > To give applications more flexibility, DPDK GSO is implemented
>> > > as a standalone library. Applications explicitly use the GSO library
>> > > to segment packets. This patch set adds GSO support to DPDK for
>> > > specific packet types: TCP/IPv4, VxLAN, and GRE.
>> > >
>> > > The first patch introduces the GSO API framework. The second patch
>> > > adds GSO support for TCP/IPv4 packets (containing an optional VLAN
>> > > tag). The third patch adds GSO support for VxLAN packets that contain
>> > > outer IPv4 and inner TCP/IPv4 headers (plus optional inner and/or
>> > > outer VLAN tags). The fourth patch adds GSO support for GRE packets
>> > > that contain outer IPv4 and inner TCP/IPv4 headers (with an optional
>> > > outer VLAN tag). The last patch in the series enables TCP/IPv4, VxLAN,
>> > > and GRE GSO in testpmd's checksum forwarding engine.
>> > >
>> > > The performance of TCP/IPv4 GSO on a 10Gbps link is demonstrated using
>> > > iperf. The test setup is as follows:
>> > >
>> > > a. Connect two 10Gbps physical ports (P0, P1) back-to-back.
>> > > b. Launch testpmd with P0 and a vhost-user port, and use the csum
>> > >    forwarding engine.
>> > > c. Select IP and TCP HW checksum calculation for P0; select TCP HW
>> > >    checksum calculation for the vhost-user port.
>> > > d. Launch a VM with csum and tso offloading enabled.
>> > > e. Run iperf-client on the virtio-net port in the VM to send TCP packets.
>> >
>> > Not sure I understand the setup correctly:
>> > So testpmd forwards packets between P0 and the vhost-user port, right?
>>
>> Yes.
>>
>> > And who uses P1? iperf-server over the Linux kernel?
>>
>> P1 is owned by the Linux kernel.
>>
>> > Also, is P1 on another box or not?
>>
>> P0 and P1 are in the same machine and are connected physically.
>>
>> >
>> > >
>> > > With GSO enabled for P0 in testpmd, the observed iperf throughput is ~9Gbps.
>> >
>> > Ok, and if GSO is disabled, what is the throughput?
>> > Another stupid question: if P0 is a physical 10G port (ixgbe?), we can just enable
>> > TSO on it, right?
>> > If so, what would the TSO numbers be here?
>>
>> Here is more detailed information about the experiments:
>>
>> test1: only enable GSO for P0, GSO size is 1518, use two iperf-clients (i.e. "-P 2")
>> test2: only enable TSO for P0, TSO size is 1518, use two iperf-clients
>> test3: disable TSO and GSO, use two iperf-clients
>>
>> test1 throughput: 8.6Gbps
>> test2 throughput: 9.5Gbps
>> test3 throughput: 3Mbps
>
>Ok, thanks for the detailed explanation.
>I'd suggest you put it into the next version's cover letter.

Thanks Konstantin - will do.

>
>>
>> >
>> > In fact, could you explain a bit more what the main usage model for this library is supposed to be?
>>
>> The GSO library is just a SW segmentation method, which can be used by
>> applications like OVS.
>> Currently, most NICs support segmenting TCP and UDP packets, but not all
>> NICs do. So current OVS doesn't enable TSO, for lack of a SW segmentation
>> fallback. Besides, the protocol types supported by HW segmentation are
>> limited. So it's necessary to provide a SW segmentation solution.
>>
>> With the GSO library, OVS and other applications are able to receive large
>> packets from VMs and process these large packets, instead of standard ones
>> (i.e. 1518B). So the per-packet overhead is reduced, since the number of
>> packets that need processing is much smaller.
>
>Ok, just out of curiosity, what is the size of the packets coming from the VM?
>Konstantin

In the case of TSO (and, as a corollary, GSO), I guess that the packet size is bounded at ~64kB. In OvS, that packet is dequeued using the rte_vhost_dequeue_burst API and stored in an mbuf chain. The data capacity of mbufs in OvS is user-defined, up to a limit of 9728B.

Thanks,
Mark

>
>
>>
>> > Is that to perform segmentation on (virtual) devices that don't support HW TSO, or ...?
>>
>> When QEMU is launched with TSO or GSO enabled, the virtual device doesn't
>> really do segmentation. It directly sends large packets. Therefore, testpmd
>> can receive large packets from the VM and then perform GSO. The GSO/TSO
>> behavior of virtual devices is different from that of physical NICs.
>>
>> > Again, would it be for a termination point (packets were just formed and filled) by the caller,
>> > or is that for a box in the middle which just forwards packets between nodes?
>> > If the latter, then we'll probably already have most of our packets segmented properly, no?
>> >
>> > > The experimental data for VxLAN and GRE will be shown later.
>> > >
>> > > Jiayu Hu (3):
>> > >   lib: add Generic Segmentation Offload API framework
>> > >   gso/lib: add TCP/IPv4 GSO support
>> > >   app/testpmd: enable TCP/IPv4, VxLAN and GRE GSO
>> > >
>> > > Mark Kavanagh (2):
>> > >   lib/gso: add VxLAN GSO support
>> > >   lib/gso: add GRE GSO support
>> > >
>> > >  app/test-pmd/cmdline.c                  | 121 +++++++++
>> > >  app/test-pmd/config.c                   |  25 ++
>> > >  app/test-pmd/csumonly.c                 |  68 ++++-
>> > >  app/test-pmd/testpmd.c                  |   9 +
>> > >  app/test-pmd/testpmd.h                  |  10 +
>> > >  config/common_base                      |   5 +
>> > >  lib/Makefile                            |   2 +
>> > >  lib/librte_eal/common/include/rte_log.h |   1 +
>> > >  lib/librte_gso/Makefile                 |  52 ++++
>> > >  lib/librte_gso/gso_common.c             | 431 ++++++++++++++++++++++++++++++++
>> > >  lib/librte_gso/gso_common.h             | 180 +++++++++++++
>> > >  lib/librte_gso/gso_tcp.c                |  82 ++++++
>> > >  lib/librte_gso/gso_tcp.h                |  73 ++++++
>> > >  lib/librte_gso/gso_tunnel.c             |  62 +++++
>> > >  lib/librte_gso/gso_tunnel.h             |  46 ++++
>> > >  lib/librte_gso/rte_gso.c                | 100 ++++++++
>> > >  lib/librte_gso/rte_gso.h                | 122 +++++++++
>> > >  lib/librte_gso/rte_gso_version.map      |   7 +
>> > >  mk/rte.app.mk                           |   1 +
>> > >  19 files changed, 1392 insertions(+), 5 deletions(-)
>> > >  create mode 100644 lib/librte_gso/Makefile
>> > >  create mode 100644 lib/librte_gso/gso_common.c
>> > >  create mode 100644 lib/librte_gso/gso_common.h
>> > >  create mode 100644 lib/librte_gso/gso_tcp.c
>> > >  create mode 100644 lib/librte_gso/gso_tcp.h
>> > >  create mode 100644 lib/librte_gso/gso_tunnel.c
>> > >  create mode 100644 lib/librte_gso/gso_tunnel.h
>> > >  create mode 100644 lib/librte_gso/rte_gso.c
>> > >  create mode 100644 lib/librte_gso/rte_gso.h
>> > >  create mode 100644 lib/librte_gso/rte_gso_version.map
>> > >
>> > > --
>> > > 2.7.4