From: "Wiles, Keith"
To: "Ananyev, Konstantin"
CC: Stephen Hemminger, "Hu, Jiayu", "dev@dpdk.org", "Kinsella, Ray", "Gilmore, Walter E", "Venkatesan, Venky", "yuanhan.liu@linux.intel.com"
Date: Tue, 24 Jan 2017 05:25:42 +0000
Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
> On Jan 23, 2017, at 6:43 PM, Ananyev, Konstantin wrote:
>
>> -----Original Message-----
>> From: Wiles, Keith
>> Sent: Monday, January 23, 2017 9:53 PM
>> To: Stephen Hemminger
>> Cc: Hu, Jiayu; dev@dpdk.org; Kinsella, Ray; Ananyev, Konstantin; Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
>> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
>>
>>> On Jan 23, 2017, at 10:15 AM, Stephen Hemminger wrote:
>>>
>>> On Mon, 23 Jan 2017 21:03:12 +0800
>>> Jiayu Hu wrote:
>>>
>>>> With the support of hardware segmentation techniques in DPDK, the
>>>> networking-stack overhead on the send side of applications that
>>>> directly leverage DPDK has been greatly reduced. On the receive side,
>>>> however, large numbers of segmented packets seriously burden the
>>>> networking stack of applications. Generic Receive Offload (GRO) is a
>>>> widely used method to address this receive-side issue: it gains
>>>> performance by reducing the number of packets processed by the
>>>> networking stack. DPDK currently does not support GRO, so we propose
>>>> to add GRO support to DPDK; this RFC explains the basic DPDK GRO design.
>>>>
>>>> DPDK GRO is a SW-based packet assembly library, which provides GRO
>>>> for a number of protocols. In DPDK GRO, packets are merged after they
>>>> are received from drivers and before they are returned to applications.
>>>>
>>>> In DPDK, GRO is a capability of NIC drivers. Whether GRO is supported,
>>>> and which GRO types are supported, is up to each NIC driver; different
>>>> drivers may support different GRO types. By default, drivers enable all
>>>> supported GRO types.
>>>> Applications can query the GRO types supported by
>>>> each driver, and can control which GRO types are applied. For example,
>>>> ixgbe supports TCP and UDP GRO, but the application may need only TCP
>>>> GRO; the application can then disable ixgbe UDP GRO.
>>>>
>>>> To support GRO, a driver should provide a way to tell applications
>>>> which GRO types are supported, and provide a GRO function, which is in
>>>> charge of assembling packets. Since different drivers may support
>>>> different GRO types, their GRO functions may differ. Applications don't
>>>> need any extra operations to enable GRO, but if some GRO types are not
>>>> needed, applications can use an API, like
>>>> rte_eth_gro_disable_protocols, to disable them. They can also
>>>> re-enable the disabled ones.
>>>>
>>>> The GRO function processes a number of packets at a time. In each
>>>> invocation, which GRO types are applied depends on the application, and
>>>> the number of packets to merge depends on the networking status and the
>>>> application. Specifically, the application determines the maximum
>>>> number of packets to be processed by the GRO function, but how many
>>>> packets are actually processed depends on whether there are packets
>>>> available to receive. For example, the receive-side application asks
>>>> the GRO function to process 64 packets, but the sender only sends 40
>>>> packets. In this case, the GRO function returns after processing 40
>>>> packets. To reassemble the given packets, the GRO function performs an
>>>> "assembly procedure" on each packet. We use an example to demonstrate
>>>> this procedure. Supposing the GRO function is going to process packetX,
>>>> it will do the following two things:
>>>> a. Find an L4 assembly function according to the packet type of
>>>> packetX. An L4 assembly function is in charge of merging packets of a
>>>> specific type.
>>>> For example, the TCPv4 assembly function merges packets
>>>> whose L3 is IPv4 and L4 is TCP. Each L4 assembly function has a packet
>>>> array, which keeps the packets that could not yet be assembled.
>>>> Initially, the packet array is empty;
>>>> b. The L4 assembly function traverses its own packet array to find a
>>>> mergeable packet (comparing Ethernet, IP and L4 header fields). If it
>>>> finds one, it merges it with packetX by chaining them together; if not,
>>>> it allocates a new array element to store packetX and updates the
>>>> element count of the array.
>>>> After performing the assembly procedure on all packets, the GRO
>>>> function combines the results of all packet arrays, and returns these
>>>> packets to applications.
>>>>
>>>> There are many ways to implement the above design in DPDK. One of them
>>>> is:
>>>> a. Drivers tell applications which GRO types are supported via
>>>> dev->dev_ops->dev_infos_get;
>>>> b. At initialization, drivers register their own GRO function as an RX
>>>> callback, which is invoked inside rte_eth_rx_burst. The name of the
>>>> GRO function should be like xxx_gro_receive (e.g. ixgbe_gro_receive).
>>>> Currently, the RX callback can only process the packets returned by
>>>> dev->rx_pkt_burst each time, and the maximum packet number
>>>> dev->rx_pkt_burst returns is determined by each driver and can't
>>>> be influenced by applications. Therefore, to implement the above GRO
>>>> design, we have to modify the current RX implementation to make the
>>>> driver return as many packets as possible, until the packet number
>>>> meets the demand of the application or there are no more packets to
>>>> receive. This modification is also proposed in patch:
>>>> http://dpdk.org/ml/archives/dev/2017-January/055887.html;
>>>> c. The GRO types to apply and the maximum number of packets to merge
>>>> are passed by resetting RX callback parameters. This can be achieved by
>>>> invoking rte_eth_rx_callback;
>>>> d.
>>>> Simply, we could just store packet addresses in the packet array.
>>>> To check one element, we would then fetch the packet via its address.
>>>> However, this simple design is not efficient enough, since checking a
>>>> packet generates a pointer dereference, and a pointer dereference often
>>>> causes a cache miss. A better way is to store some rules in each array
>>>> element. The rules must be the prerequisites for merging two packets,
>>>> like the sequence number of TCP packets. We first compare the rules,
>>>> then retrieve the packet only if the rules match. If storing the rules
>>>> makes the packet array structure cache-unfriendly, we can store a
>>>> fixed-length signature of the rules instead. For example, the signature
>>>> can be calculated by XORing the IP addresses. Both designs avoid
>>>> unnecessary pointer dereferences.
>>>
>>>
>>> Since DPDK does burst mode already, GRO is a lot less relevant.
>>> GRO in Linux was invented because there is no burst mode in the receive API.
>>>
>>> If you look at VPP in FD.io you will see they already do aggregation and
>>> steering at a higher level in the stack.
>>>
>>> The point of GRO is that it is generic: no driver changes are necessary.
>>> Your proposal would add a lot of overhead, and cause drivers to have to
>>> be aware of higher-level flows.
>>
>> NACK
>>
>> The design is not super clear to me here, and we need to understand the
>> impact on DPDK, performance and the application. I would like to have a
>> design that is transparent to the application and has as little impact
>> on performance as possible.
>>
>> Let's discuss this, as I am not sure my previous concerns were addressed
>> in this RFC.
>>
>
> I would agree that the design looks overcomplicated and strange:
> if GRO can be (and is supposed to be) done fully in SW, why do we need to
> modify PMDs at all? Why can't it just be a standalone DPDK library that
> the user can use at his/her convenience?
> I'd suggest starting with some simple and widespread case (TCP?) and
> trying to implement a library for it first: something similar to what we
> have for IP reassembly.

The reason this should not be a library the application calls is to allow
for a transparent design for HW and SW support of this feature. Using the
SW version, the application should not need to be aware (other than for
performance) that GRO is being done on this port. As I was told, the Linux
kernel hides this feature and makes it transparent.

> Konstantin

Regards,
Keith