From: "Wiles, Keith"
To: Stephen Hemminger
CC: "Hu, Jiayu", "dev@dpdk.org", "Kinsella, Ray", "Ananyev, Konstantin",
 "Gilmore, Walter E", "Venkatesan, Venky", "yuanhan.liu@linux.intel.com"
Date: Mon, 23 Jan 2017 21:53:11 +0000
Message-ID: <6B5C6BED-CAD4-4C51-8FB7-8509663B813B@intel.com>
References: <1485176592-111525-1-git-send-email-jiayu.hu@intel.com>
 <20170123091550.212dca35@xeon-e3>
In-Reply-To: <20170123091550.212dca35@xeon-e3>
Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK

> On Jan 23, 2017, at 10:15 AM, Stephen Hemminger wrote:
>
> On Mon, 23 Jan 2017 21:03:12 +0800
> Jiayu Hu wrote:
>
>> With the support of hardware segmentation techniques in DPDK, the
>> send-side networking stack overhead of applications that directly
>> leverage DPDK has been greatly reduced. On the receive side, however,
>> large numbers of segmented packets place a heavy burden on the
>> networking stack of applications. Generic Receive Offload (GRO) is a
>> widely used method to address this receive-side issue: it improves
>> performance by reducing the number of packets processed by the
>> networking stack. Currently, DPDK does not support GRO. We therefore
>> propose to add GRO support to DPDK, and this RFC explains the basic
>> DPDK GRO design.
>>
>> DPDK GRO is a software-based packet assembly library that provides GRO
>> for a number of protocols. In DPDK GRO, packets are merged after they
>> are received from drivers and before they are returned to applications.
>>
>> In DPDK, GRO is a capability of NIC drivers. Whether GRO is supported,
>> and which GRO types are supported, is up to each NIC driver; different
>> drivers may support different GRO types. By default, drivers enable all
>> supported GRO types. Applications can query the GRO types supported by
>> each driver and control which GRO types are applied. For example, if
>> ixgbe supports TCP and UDP GRO but the application only needs TCP GRO,
>> the application can disable ixgbe UDP GRO.
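
A minimal sketch of the usage model described above. The GRO type flags
and both prototypes are assumptions made only for illustration;
rte_eth_gro_disable_protocols is just the example name used in this RFC,
and none of these symbols exist in DPDK today.

#include <stdint.h>

/* Hypothetical GRO type flags and APIs, named only to illustrate the
 * query/disable model described in the RFC; they do not exist in DPDK. */
#define GRO_TCP_IPV4 (1ULL << 0)
#define GRO_UDP_IPV4 (1ULL << 1)

uint64_t rte_eth_gro_supported_protocols(uint16_t port_id);
int rte_eth_gro_disable_protocols(uint16_t port_id, uint64_t gro_types);

static void
configure_gro(uint16_t port_id)
{
	uint64_t supported = rte_eth_gro_supported_protocols(port_id);

	/* The application only wants TCP GRO, so it switches off UDP GRO
	 * if the driver (e.g. ixgbe in the example above) enables it by
	 * default. */
	if (supported & GRO_UDP_IPV4)
		rte_eth_gro_disable_protocols(port_id, GRO_UDP_IPV4);
}
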
>>
>> To support GRO, a driver should provide a way to tell applications
>> which GRO types it supports, and should provide a GRO function that is
>> in charge of assembling packets. Since different drivers may support
>> different GRO types, their GRO functions may differ. Applications do
>> not need extra operations to enable GRO, but if some GRO types are not
>> needed, applications can use an API, like
>> rte_eth_gro_disable_protocols, to disable them. They can also re-enable
>> the disabled ones.
>>
>> The GRO function processes a number of packets at a time. In each
>> invocation, which GRO types are applied depends on the application, and
>> the number of packets to merge depends on the application and on the
>> network status. Specifically, the application determines the maximum
>> number of packets to be processed by the GRO function, but how many
>> packets are actually processed depends on whether there are packets
>> available to receive. For example, the receive-side application asks
>> the GRO function to process 64 packets, but the sender only sends 40;
>> in that case the GRO function returns after processing 40 packets. To
>> reassemble the given packets, the GRO function performs an "assembly
>> procedure" on each packet. Supposing the GRO function is going to
>> process packetX, it does the following two things:
>> a. Find an L4 assembly function according to the packet type of
>> packetX. An L4 assembly function is in charge of merging packets of a
>> specific type. For example, the TCPv4 assembly function merges packets
>> whose L3 is IPv4 and whose L4 is TCP. Each L4 assembly function has a
>> packet array, which keeps the packets that could not yet be assembled.
>> Initially, the packet array is empty;
>> b. The L4 assembly function traverses its own packet array to find a
>> mergeable packet (comparing Ethernet, IP and L4 header fields). If it
>> finds one, it merges it with packetX by chaining them together; if it
>> does not, it allocates a new array element to store packetX and updates
>> the element count of the array.
>> After performing the assembly procedure on all packets, the GRO
>> function combines the results of all packet arrays and returns these
>> packets to the application.
>>
>> There are many ways to implement the above design in DPDK. One of them
>> is:
>> a. Drivers tell applications which GRO types they support via
>> dev->dev_ops->dev_infos_get;
>> b. At initialization time, drivers register their own GRO function as
>> an RX callback, which is invoked inside rte_eth_rx_burst. The name of
>> the GRO function should be like xxx_gro_receive (e.g.
>> ixgbe_gro_receive). Currently, the RX callback can only process the
>> packets returned by dev->rx_pkt_burst each time, and the maximum number
>> of packets dev->rx_pkt_burst returns is determined by each driver and
>> cannot be influenced by applications. Therefore, to implement the above
>> GRO design, we have to modify the current RX implementation so that the
>> driver returns as many packets as possible, until the packet number
>> meets the demand of the application or there are no more packets to
>> receive. This modification is also proposed in the patch:
>> http://dpdk.org/ml/archives/dev/2017-January/055887.html;
>> c. The GRO types to apply and the maximum number of packets to merge
>> are passed by resetting the RX callback parameters. This can be
>> achieved by invoking rte_eth_rx_callback;
>> d. Simply, we could just store packet addresses in the packet array.
>> To check one element, we would need to fetch the packet via its
>> address. However, this simple design is not efficient enough: every
>> time one packet is checked, a pointer dereference is generated, and a
>> pointer dereference often causes a cache miss. A better way is to store
>> some rules in each array element. The rules must be the prerequisites
>> for merging two packets, like the sequence number of TCP packets. We
>> first compare the rules, then retrieve the packet only if the rules
>> match. If storing the rules makes the packet array structure
>> cache-unfriendly, we can store a fixed-length signature of the rules
>> instead. For example, the signature can be calculated by XORing the IP
>> addresses. Both designs avoid unnecessary pointer dereferences.
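
A minimal sketch of the packet-array element described in (d), keeping
the merge keys and a signature inline so the stored mbuf is only
dereferenced once the keys already match. The structure and function
names are illustrative assumptions, not existing DPDK API.

#include <stdint.h>
#include <rte_mbuf.h>

/* Illustrative per-flow element of a TCPv4 packet array. */
struct gro_tcp4_item {
	uint32_t ip_src;
	uint32_t ip_dst;
	uint16_t port_src;
	uint16_t port_dst;
	uint32_t next_seq;	/* expected TCP sequence number */
	uint32_t signature;	/* cheap pre-filter, e.g. XOR of the IPs */
	struct rte_mbuf *pkt;	/* head of the chained mbufs */
};

/* Return the index of a mergeable element, or -1 if packetX needs a new
 * slot. The signature rejects most non-matching elements, and the stored
 * mbuf pointer is never dereferenced during the lookup. */
static int
gro_tcp4_lookup(const struct gro_tcp4_item *items, uint16_t nb_items,
		uint32_t ip_src, uint32_t ip_dst,
		uint16_t port_src, uint16_t port_dst, uint32_t seq)
{
	uint32_t sig = ip_src ^ ip_dst;
	uint16_t i;

	for (i = 0; i < nb_items; i++) {
		if (items[i].signature != sig)
			continue;	/* cheap reject on the signature */
		if (items[i].ip_src == ip_src && items[i].ip_dst == ip_dst &&
		    items[i].port_src == port_src &&
		    items[i].port_dst == port_dst &&
		    items[i].next_seq == seq)
			return i;	/* mergeable: chain packetX here */
	}
	return -1;
}
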
>
> Since DPDK does burst mode already, GRO is a lot less relevant.
> GRO in Linux was invented because there is no burst mode in the receive
> API.
>
> If you look at VPP in FD.io you will see they already do aggregation and
> steering at a higher level in the stack.
>
> The point of GRO is that it is generic: no driver changes are necessary.
> Your proposal would add a lot of overhead, and cause drivers to have to
> be aware of higher level flows. NACK

The design is not entirely clear to me here, and we need to understand the
impact on DPDK, on performance and on the application. I would like a
clean design that is transparent to the application and has as little
impact on performance as possible.

Let's discuss this, as I am not sure my previous concerns were addressed
in this RFC.

Regards,
Keith