From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Xie, Huawei"
To: "Ouyang, Changchun", Thomas Monjalon
Cc: "dev@dpdk.org"
Date: Wed, 29 Oct 2014 23:37:21 +0000
Subject: Re: [dpdk-dev] [PATCH] vhost: Check descriptor number for vector Rx
List-Id: patches and discussions about DPDK

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ouyang, Changchun
> Sent: Monday,
October 27, 2014 6:56 AM
> To: Thomas Monjalon
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: Check descriptor number for vector Rx
>
> Hi Thomas,
>
> > -----Original Message-----
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Monday, October 27, 2014 4:46 PM
> > To: Ouyang, Changchun
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH] vhost: Check descriptor number for vector Rx
> >
> > 2014-10-25 00:48, Ouyang, Changchun:
> > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > 2014-10-24 16:38, Ouyang Changchun:
> > > > > For zero copy, we need to check whether the RX descriptor number
> > > > > meets the minimum requirement when using the vector PMD Rx
> > > > > function, and give the user more hints if it fails to meet it.
> > > > [...]
> > > > > --- a/examples/vhost/main.c
> > > > > +++ b/examples/vhost/main.c
> > > > > @@ -131,6 +131,10 @@
> > > > >  #define RTE_TEST_RX_DESC_DEFAULT_ZCP 32 /* legacy: 32, DPDK virt FE: 128. */
> > > > >  #define RTE_TEST_TX_DESC_DEFAULT_ZCP 64 /* legacy: 64, DPDK virt FE: 64. */
> > > > >
> > > > > +#ifdef RTE_IXGBE_INC_VECTOR
> > > > > +#define VPMD_RX_BURST 32
> > > > > +#endif
> > > > > +
> > > > >  /* Get first 4 bytes in mbuf headroom.
*/
> > > > >  #define MBUF_HEADROOM_UINT32(mbuf) (*(uint32_t *)((uint8_t *)(mbuf) \
> > > > >  	+ sizeof(struct rte_mbuf)))
> > > > > @@ -792,6 +796,19 @@ us_vhost_parse_args(int argc, char **argv)
> > > > >  		return -1;
> > > > >  	}
> > > > >
> > > > > +#ifdef RTE_IXGBE_INC_VECTOR
> > > > > +	if ((zero_copy == 1) && (num_rx_descriptor <= VPMD_RX_BURST)) {
> > > > > +		RTE_LOG(INFO, VHOST_PORT,
> > > > > +			"The RX desc num: %d is too small for PMD to work\n"
> > > > > +			"properly, please enlarge it to bigger than %d if\n"
> > > > > +			"possible by the option: '--rx-desc-num'\n"
> > > > > +			"One alternative is disabling RTE_IXGBE_INC_VECTOR\n"
> > > > > +			"in config file and rebuild the libraries.\n",
> > > > > +			num_rx_descriptor, VPMD_RX_BURST);
> > > > > +		return -1;
> > > > > +	}
> > > > > +#endif
> > > > > +
> > > > > 	return 0;
> > > > > }
> > > >
> > > > I feel there is a design problem here.
> > > > An application shouldn't have to care about the underlying driver.
> > >
> > > For most other applications, the descriptor number is set big enough
> > > (1024 or so), so there is no need to check it at the early stage of
> > > running.
> > >
> > > But vhost zero copy (note that vhost one copy also has a 1024
> > > descriptor number) has a default descriptor number of 32.
> > > Why use 32?
> > > Because the vhost zero copy implementation (working as the backend)
> > > needs to support DPDK-based apps which use the PMD virtio-net driver,
> > > and also needs to support Linux legacy virtio-net based applications.
> > > In the Linux legacy virtio-net case, on one side QEMU hard-codes the
> > > total virtio descriptor size to 256; on the other side, legacy virtio
> > > uses half of them for virtio headers, and then only the other half,
> > > i.e. 128 descriptors, is available to use as real buffers.
> > >
> > > In PMD mode, all HW descriptors need to have a DMA address filled in
> > > at the RX initialization stage, otherwise the RX process will likely
> > > hit exceptions.
> > > Based on that, we need to use the really limited virtio buffers to
> > > fully fill all HW descriptor DMA addresses. In other words, the
> > > available virtio descriptor size determines the total mbuf size and
> > > HW descriptor size in the zero copy case.
> > >
> > > Tuning found that 32 is the suitable value for vhost zero copy to
> > > work properly in the legacy Linux virtio case.
> > > Another factor reducing the value to 32 is that the mempool uses a
> > > ring to accommodate the mbufs, which costs one entry to flag the
> > > ring head/tail, and there are some other overheads like temporary
> > > mbufs (sized as RX_BURST) during RX.
> > > Note that the descriptor number needs to be a power of 2.
> > >
> > > Why does the change occur at this moment?
> > > Recently the default RX function was changed to the vector RX
> > > function, while it used non-vector (scalar) mode RX previously.
> > > The vector RX function needs more than 32 descriptors to work
> > > properly, but scalar mode RX doesn't have this limitation.
> > >
> > > As the RX function is changeable (you can use vector or non-vector
> > > mode), and the descriptor number can also be changed,
> > > the vhost app checks here whether they match, to make sure
> > > everything can work normally, and gives some hints if they don't.
> > >
> > > Hope the above makes it a bit clearer. :-)
> >
> > Thank you for your explanation.
> > Your fix shows that driver and application are tightly linked.
> > It's a design issue. As I said:
> > "An application shouldn't have to care about the underlying driver."
> > I didn't dig enough in vhost to suggest a good fix but I'm sure
> > someone could have an idea.
> >
> Agree with you, there is something linked between app and driver, but
> that's due to a few things:
> 1. QEMU hard-codes the total vring size;
> 2. The PMD driver needs to fully fill the DMA addresses of the
> descriptors at the setup stage;

Can the PMD driver fill its desc ring until either all descriptors have
been filled or there is no mbuf left in the pool?

> 3. The PMD driver uses the vector PMD as the default path, which
> requires more than 32 in the burst RX;
> 4. Zero copy needs to use the external buffer directly and set it as
> the HW descriptor DMA address;
>
> Except for item 3, none of these can be removed or ignored easily.
> As for item 3, I don't know why we set the vector PMD as the default
> path; we are pursuing performance, but at the cost of flexibility.
> Here is an example: vhost zero copy needs to check it against the
> descriptor number. I am not sure whether the default vector PMD path
> will affect other apps or not.
>
> On the other hand, without this check and hint,
> the user will suffer: vhost can't receive packets any more after
> forwarding 31 packets or so,
> the user doesn't know what really happens behind it, and can't easily
> find out that
> their descriptor number is not enough for vector PMD RX.
> Then this is followed by painful debugging to figure out the issue.
>
> With this check and hint to the user, things go smoothly.
>
> Meanwhile we can wait for other people's viewpoints.
>
> Thanks
> Changchun