From: "Wiles, Keith"
To: Stephen Hemminger
CC: "Ananyev, Konstantin", "Hu, Jiayu", "dev@dpdk.org", "Kinsella, Ray", "Gilmore, Walter E", "Venkatesan, Venky", "yuanhan.liu@linux.intel.com"
Date: Wed, 25 Jan 2017 03:39:45 +0000
Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK

> On Jan 24, 2017, at 2:04 PM, Stephen Hemminger wrote:
>
> On Tue, 24 Jan 2017 20:09:07 +0000
> "Wiles, Keith" wrote:
>
>>> On Jan 24, 2017, at 12:45 PM, Ananyev, Konstantin wrote:
>>>
>>>> -----Original Message-----
>>>> From: Wiles, Keith
>>>> Sent: Tuesday, January 24, 2017 2:49 PM
>>>> To: Ananyev, Konstantin
>>>> Cc: Stephen Hemminger; Hu, Jiayu; dev@dpdk.org; Kinsella, Ray; Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
>>>> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
>>>>
>>>>> On Jan 24, 2017, at 3:33 AM, Ananyev, Konstantin wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Wiles, Keith
>>>>>> Sent: Tuesday, January 24, 2017 5:26 AM
>>>>>> To: Ananyev, Konstantin
>>>>>> Cc: Stephen Hemminger; Hu, Jiayu; dev@dpdk.org; Kinsella, Ray; Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
>>>>>> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
>>>>>>
>>>>>>> On Jan 23, 2017, at 6:43 PM, Ananyev, Konstantin wrote:
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Wiles, Keith
>>>>>>>> Sent: Monday, January 23, 2017 9:53 PM
>>>>>>>> To: Stephen Hemminger
>>>>>>>> Cc: Hu, Jiayu; dev@dpdk.org; Kinsella, Ray; Ananyev, Konstantin; Gilmore, Walter E; Venkatesan, Venky; yuanhan.liu@linux.intel.com
>>>>>>>> Subject: Re: [dpdk-dev] [RFC] Add GRO support in DPDK
>>>>>>>>
>>>>>>>>> On Jan 23, 2017, at
10:15 AM, Stephen Hemminger wrote:
>>>>>>>>>
>>>>>>>>> On Mon, 23 Jan 2017 21:03:12 +0800
>>>>>>>>> Jiayu Hu wrote:
>>>>>>>>>
>>>>>>>>>> With the support of hardware segmentation techniques in DPDK, the
>>>>>>>>>> networking-stack overheads on the send side of applications that
>>>>>>>>>> directly leverage DPDK have been greatly reduced. But on the receive
>>>>>>>>>> side, large numbers of segmented packets seriously burden the
>>>>>>>>>> networking stack of applications. Generic Receive Offload (GRO) is a
>>>>>>>>>> widely used method to solve this receive-side issue, which gains
>>>>>>>>>> performance by reducing the number of packets processed by the
>>>>>>>>>> networking stack. But currently, DPDK doesn't support GRO. Therefore,
>>>>>>>>>> we propose to add GRO support in DPDK, and this RFC explains the
>>>>>>>>>> basic DPDK GRO design.
>>>>>>>>>>
>>>>>>>>>> DPDK GRO is a SW-based packet assembly library, which provides GRO
>>>>>>>>>> capabilities for a number of protocols. In DPDK GRO, packets are
>>>>>>>>>> merged after being received from drivers and before being returned to
>>>>>>>>>> applications.
>>>>>>>>>>
>>>>>>>>>> In DPDK, GRO is a capability of NIC drivers. Whether GRO is supported,
>>>>>>>>>> and which GRO types are supported, is up to each NIC driver. Different
>>>>>>>>>> drivers may support different GRO types. By default, drivers enable
>>>>>>>>>> all supported GRO types. Applications can query the GRO types
>>>>>>>>>> supported by each driver, and can control which GRO types are applied.
>>>>>>>>>> For example, ixgbe supports TCP and UDP GRO, but the application may
>>>>>>>>>> need only TCP GRO; the application can then disable ixgbe UDP GRO.
>>>>>>>>>>
>>>>>>>>>> To support GRO, a driver should provide a way to tell applications
>>>>>>>>>> which GRO types are supported, and provide a GRO function, which is
>>>>>>>>>> in charge of assembling packets.
Since different drivers may support
>>>>>>>>>> different GRO types, their GRO functions may differ. Applications
>>>>>>>>>> don't need any extra operations to enable GRO. But if some GRO types
>>>>>>>>>> are not needed, applications can use an API, like
>>>>>>>>>> rte_eth_gro_disable_protocols, to disable them. Besides, they can
>>>>>>>>>> re-enable the disabled ones.
>>>>>>>>>>
>>>>>>>>>> The GRO function processes a number of packets at a time. In each
>>>>>>>>>> invocation, which GRO types are applied depends on the application,
>>>>>>>>>> and the number of packets to merge depends on the networking status
>>>>>>>>>> and the application. Specifically, the application determines the
>>>>>>>>>> maximum number of packets to be processed by the GRO function, but how
>>>>>>>>>> many packets are actually processed depends on whether there are
>>>>>>>>>> packets available to receive. For example, if the receive-side
>>>>>>>>>> application asks the GRO function to process 64 packets, but the
>>>>>>>>>> sender only sends 40 packets, the GRO function returns after
>>>>>>>>>> processing 40 packets. To reassemble the given packets, the GRO
>>>>>>>>>> function performs an "assembly procedure" on each packet. We use an
>>>>>>>>>> example to demonstrate this procedure. Supposing the GRO function is
>>>>>>>>>> going to process packetX, it will do the following two things:
>>>>>>>>>> a. Find an L4 assembly function according to the packet type of
>>>>>>>>>>    packetX. An L4 assembly function is in charge of merging packets of
>>>>>>>>>>    a specific type. For example, the TCPv4 assembly function merges
>>>>>>>>>>    packets whose L3 is IPv4 and L4 is TCP. Each L4 assembly function
>>>>>>>>>>    has a packet array, which keeps the packets that could not be
>>>>>>>>>>    assembled. Initially, the packet array is empty;
>>>>>>>>>> b.
The L4 assembly function traverses its own packet array to find a
>>>>>>>>>>    mergeable packet (comparing Ethernet, IP and L4 header fields). If
>>>>>>>>>>    it finds one, it merges it with packetX by chaining them together;
>>>>>>>>>>    if it doesn't, it allocates a new array element to store packetX
>>>>>>>>>>    and updates the element count of the array.
>>>>>>>>>> After performing the assembly procedure on all packets, the GRO
>>>>>>>>>> function combines the results of all packet arrays and returns these
>>>>>>>>>> packets to applications.
>>>>>>>>>>
>>>>>>>>>> There are many ways to implement the above design in DPDK. One of
>>>>>>>>>> them is:
>>>>>>>>>> a. Drivers tell applications which GRO types are supported via
>>>>>>>>>>    dev->dev_ops->dev_infos_get;
>>>>>>>>>> b. At initialization, drivers register their own GRO function as an
>>>>>>>>>>    RX callback, which is invoked inside rte_eth_rx_burst. The name of
>>>>>>>>>>    the GRO function should be like xxx_gro_receive (e.g.
>>>>>>>>>>    ixgbe_gro_receive). Currently, the RX callback can only process the
>>>>>>>>>>    packets returned by dev->rx_pkt_burst each time, and the maximum
>>>>>>>>>>    number of packets dev->rx_pkt_burst returns is determined by each
>>>>>>>>>>    driver, which can't be influenced by applications. Therefore, to
>>>>>>>>>>    implement the above GRO design, we have to modify the current RX
>>>>>>>>>>    implementation to make the driver return as many packets as
>>>>>>>>>>    possible, until the packet number meets the application's demand or
>>>>>>>>>>    there are no more packets to receive. This modification is also
>>>>>>>>>>    proposed in patch:
>>>>>>>>>>    http://dpdk.org/ml/archives/dev/2017-January/055887.html;
>>>>>>>>>> c. The GRO types to apply and the maximum number of packets to merge
>>>>>>>>>>    are passed by resetting RX callback parameters. This can be
>>>>>>>>>>    achieved by invoking rte_eth_rx_callback;
>>>>>>>>>> d. Simply, we can just store packet addresses in the packet array.
>>>>>>>>>>    To check one element, we need to fetch the packet via its address.
>>>>>>>>>>    However, this simple design is not efficient enough, since checking
>>>>>>>>>>    a packet then requires a pointer dereference, and a pointer
>>>>>>>>>>    dereference often causes a cache miss. A better way is to store
>>>>>>>>>>    some rules in each array element. The rules must be the
>>>>>>>>>>    prerequisites for merging two packets, like the sequence number of
>>>>>>>>>>    TCP packets. We first compare the rules, then retrieve the packet
>>>>>>>>>>    only if the rules match. If storing the rules makes the packet
>>>>>>>>>>    array structure cache-unfriendly, we can store a fixed-length
>>>>>>>>>>    signature of the rules instead. For example, the signature can be
>>>>>>>>>>    calculated by performing an XOR operation on the IP addresses.
>>>>>>>>>>    Both designs avoid unnecessary pointer dereferences.
>>>>>>>>>
>>>>>>>>> Since DPDK does burst mode already, GRO is a lot less relevant.
>>>>>>>>> GRO in Linux was invented because there is no burst mode in the
>>>>>>>>> receive API.
>>>>>>>>>
>>>>>>>>> If you look at VPP in FD.io you will see they already do aggregation
>>>>>>>>> and steering at a higher level in the stack.
>>>>>>>>>
>>>>>>>>> The point of GRO is that it is generic; no driver changes are
>>>>>>>>> necessary. Your proposal would add a lot of overhead, and cause
>>>>>>>>> drivers to have to be aware of higher-level flows.
>>>>>>>>
>>>>>>>> NACK
>>>>>>>>
>>>>>>>> The design is not super clear to me here, and we need to understand the
>>>>>>>> impact on DPDK, performance and the application. I would like to have a
>>>>>>>> clean, transparent design for the application, and as little impact on
>>>>>>>> performance as possible.
>>>>>>>>
>>>>>>>> Let's discuss this, as I am not sure my previous concerns were
>>>>>>>> addressed in this RFC.
>>>>>>>>
>>>>>>>
>>>>>>> I would agree that the design looks overcomplicated and strange:
>>>>>>> If GRO can be (and is supposed to be) done fully in SW, why do we need
>>>>>>> to modify PMDs at all? Why can't it just be a standalone DPDK library
>>>>>>> that users can use at their convenience?
>>>>>>> I'd suggest starting with some simple and widespread case (TCP?) and
>>>>>>> trying to implement a library for it first: something similar to what
>>>>>>> we have for IP reassembly.
>>>>>>
>>>>>> The reason this should not be a library the application calls is to
>>>>>> allow for a transparent design for HW and SW support of this feature.
>>>>>> Using the SW version, the application should not need to understand
>>>>>> (other than performance) that GRO is being done for this port.
>>>>>>
>>>>>
>>>>> Why is that?
>>>>> Let's say we have an IP reassembly library that is called explicitly by
>>>>> the application. I think for L4 grouping we can do the same.
>>>>> After all, it is a pure SW feature, so to me it makes sense to allow the
>>>>> application to decide when/where to call it.
>>>>> Again, it would allow people to develop/use it without any modifications
>>>>> to current PMDs.
>>>>
>>>> I guess I did not make it clear: we need to support the HW and this SW
>>>> version transparently, just as we handle other features in HW/SW under a
>>>> generic API for DPDK.
>>>
>>> Ok, I probably wasn't very clear either.
>>> What I meant:
>>> Let's try to implement GRO (in SW) as a standalone DPDK library,
>>> with a clean & simple interface, and see how fast and useful it would be.
>>> We can refer to it as step 1.
>>> When (if) we have step 1 in place, then we can start thinking
>>> about adding a combined HW/SW solution for it (step 2).
>>> I think at that stage it would be much clearer:
>>> is there any point in it at all,
>>> and if yes, how it should be done:
>>> - changes at the rte_ethdev or PMD layers, or both
>>> - would changes to the rte_ethdev API be needed, and if yes, which ones, etc.
>>>
>>> From my perspective, without step 1 in place, there is not much point in
>>> approaching step 2.
>>
>> Currently I believe they have a SW library version of the code, but I
>> think we need to look at the design in that form. At this time, the
>> current design or code is not what I would expect for the transparent
>> version: too many interactions with the application, and separate Rx/Tx
>> functions were being used (if I remember correctly).
>>
>>>
>>> BTW, any particular HW you have in mind?
>>> Currently, as far as I can see, LRO (HW) is supported only by ixgbe and
>>> probably by virtual PMDs (virtio/vmxnet3).
>>> Though even for ixgbe there are plenty of limitations: SRIOV mode should
>>> be off, HW CRC stripping should be off, etc.
>>> So my guess is that right now step 1 is much more useful and feasible.
>>>
>>>>
>>>>>
>>>>>> As I was told, the Linux kernel hides this feature and makes it
>>>>>> transparent.
>>>>>
>>>>> Yes, but DPDK does a lot of things in a different way.
>>>>> So it doesn't look like a compelling reason to me :)
>>>>
>>>> Just looking at different options here, and it is a compelling reason to
>>>> me as it ensures the design can be transparent to the application.
>>>> Having the application in an NFV deciding on HW or SW or both is not a
>>>> good place to put that logic, IMO.
>>>
>>> Actually, could you provide an example of a Linux NIC driver that uses HW
>>> offloads (and which ones) to implement GRO?
>>> I presume some might use HW-generated hashes, but apart from that, when
>>> does HW perform actual packet grouping?
>>> From what I've seen, Intel ones rely on a SW implementation for that.
>>> But I am not a Linux/GRO expert, so feel free to correct me here.
>>> Konstantin
>>
>> Regards,
>> Keith
>>
>
> Linux uses a push (rather than DPDK's pull) model for packet receiving.
> The Linux driver pushes packets into GRO by calling napi_gro_receive.
>
> Since DPDK is a pull model, the API would be simpler.
> It could be as simple as:
>     nb = rte_eth_rx_burst(port, rx_pkts, N);
>     nb = rte_rx_gro(port, rx_pkts, gro_pkts, nb);
>
> I agree with others: look at the IP reassembly library as an example.
> Also, GRO does not make sense for applications which already do the same
> vector flow processing, like VPP, which is one reason it should be optional.

I agree it should be optional, but I worry about making it an example. I would like to see GRO be more transparent to the application and supported as a generic feature of DPDK. Maybe the application needs to request the support, or it is a config option. The problem with config options is they are hard to test, and testing becomes complex.

Can we not figure out a way to add the feature inline instead of the application needing to call these APIs? It would be nice to have IP fragmentation also be an optional feature of the rx/tx ethdev call. That would take it out of the example zone and move it into DPDK as a real feature. Today we expect the application to chain all of these little bits outside of DPDK into something useful; can we help fix that problem?

Regards,
Keith
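P.S. Stephen's two-call sketch above is pseudo-code — rte_rx_gro is not an existing DPDK API. A toy, self-contained model of what such a standalone reassembly pass could look like (plain structs in place of rte_mbuf, a single flow id in place of the real header comparison, and chaining via a next pointer the way GRO would chain mbuf segments):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Toy segment: 'flow' stands in for the (addrs, ports) tuple and 'seq'
 * for the TCP sequence number; 'next' chains merged segments the way
 * GRO would chain mbufs. */
struct seg {
    uint32_t flow;
    uint32_t seq;
    uint32_t len;
    struct seg *next;
};

/* One pass over a burst: for each segment, look for an earlier head of the
 * same flow whose data ends exactly where this segment starts, and chain
 * them; otherwise keep the segment as a new head. Returns the number of
 * heads written to out[], mirroring the nb in/out of the sketch above. */
static int toy_gro(struct seg **in, int nb, struct seg **out)
{
    int nb_out = 0;
    for (int i = 0; i < nb; i++) {
        struct seg *p = in[i];
        int merged = 0;
        for (int j = 0; j < nb_out; j++) {
            struct seg *h = out[j];
            /* walk the chain to find its tail and where its data ends */
            struct seg *t = h;
            uint32_t end = h->seq;
            for (;;) {
                end += t->len;
                if (!t->next)
                    break;
                t = t->next;
            }
            if (h->flow == p->flow && end == p->seq) {
                t->next = p;      /* contiguous: chain instead of copying */
                merged = 1;
                break;
            }
        }
        if (!merged)
            out[nb_out++] = p;    /* new flow head */
    }
    return nb_out;
}
```

The real library would of course compare full headers, handle out-of-order segments and flush timeouts, but the control flow — a pull-model pass that shrinks a burst in place, callable right after rte_eth_rx_burst with no PMD changes — is the point being argued for above.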