From: François-Frédéric Ozog
To: 'Prashant Upadhyaya', '吴亚东', 'Thomas Monjalon'
Cc: dev@dpdk.org
Date: Fri, 6 Dec 2013 08:53:21 +0100
Subject: Re: [dpdk-dev] generic load balancing
List-Id: patches and discussions about DPDK
Can we (as a community) lead the way for the NIC vendors?

A few years ago I had a discussion with Chelsio about solving MPLS and GTP load balancing. They were happy to integrate the "requirements" into their roadmap...

So could we build a list of such "requirements" and publish it? NIC vendors are looking for ways to differentiate from one another, so I assume this may help us get what we want.

In addition to the NIC requirements, we could polish an API to control those features in a standard way from DPDK.

François-Frédéric

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Prashant Upadhyaya
> Sent: Friday, 6 December 2013 05:04
> To: 吴亚东; Thomas Monjalon
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] generic load balancing
>
> Hi,
>
> Regarding this point:
>
> "If Intel supports round-robin distribution of packets within the same
> flow, Intel needs to provide some mechanism like Cavium's SSO (tag
> switch) to maintain packet order within the flow. And that is hard to
> do because Intel's CPUs and NICs are decoupled."
>
> My main submission is: I understand there are issues like the above and
> the out-of-order (OOO) delivery you pointed out.
> But that is for the use-case implementer to solve in software logic.
> The equivalent of a tag switch can be developed in software if the use
> case so desires.
> But at least *give* the facility in the NIC to fan out round-robin
> across queues.
> Somehow we are trying to find reasons why we should not have it.
> I am saying: provide it in the NIC and let people use it in innovative
> ways. People who don't want it can simply choose not to use it.
>
> Regards,
> Prashant
>
>
> From: 吴亚东 [mailto:ydwoo0722@gmail.com]
> Sent: Friday, December 06, 2013 7:47 AM
> To: Thomas Monjalon
> Cc: Michael Quicquaro; Prashant Upadhyaya; dev@dpdk.org
> Subject: Re: [dpdk-dev] generic load balancing
>
> RSS is a way to distribute packets across multiple cores while packet
> order within a flow is still maintained.
>
> Round-robin distribution may deliver packets of the same flow out of
> order (OOO).
> We also hit this problem in the IPsec VPN case.
> Tunneled packets are steered by RSS to the same queue if they belong to
> the same tunnel.
> But if we dispatch those packets to other cores for processing, OOO
> packets may occur and TCP performance may be greatly hurt.
>
> If you enable RSS on UDP packets and some of them are IP-fragmented,
> the RSS hash of the fragments (computed from the IP addresses only) may
> differ from the hash of the unfragmented packets (which also covers the
> UDP ports), so OOO may occur here too.
> That is why the kernel driver disables UDP RSS by default.
>
> If Intel supports round-robin distribution of packets within the same
> flow, Intel needs to provide some mechanism like Cavium's SSO (tag
> switch) to maintain packet order within the flow. And that is hard to
> do because Intel's CPUs and NICs are decoupled.
>
>
> 2013/12/6 Thomas Monjalon
>
> Hello,
>
> 05/12/2013 16:42, Michael Quicquaro:
> > This is a good discussion and I hope Intel can see and benefit from
> > it.
>
> Don't forget that this project is Open Source.
> So you can submit your patches for review.
>
> Thanks for participating
> --
> Thomas