Date: Thu, 3 Oct 2019 18:25:52 -0300
From: Flavio Leitner
To: Ilya Maximets
Cc: Maxime Coquelin, Shahaf Shuler, David Marchand, dev@dpdk.org, Tiwei Bie, Zhihong Wang, Obrembski MichalX, Stokes Ian
Message-ID: <20191003182552.3f978ef5@p50.lan>
In-Reply-To: <088ea83c-cc00-5542-a554-ca857b9ef6ec@ovn.org>
References: <20191001221935.12140-1-fbl@sysclose.org> <20191002095831.5927af93@p50.lan> <20191002151528.0f285b8a@p50.lan> <088ea83c-cc00-5542-a554-ca857b9ef6ec@ovn.org>
Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear mbufs

On Thu, 3 Oct 2019 18:57:32 +0200
Ilya Maximets wrote:

> On 02.10.2019 20:15, Flavio Leitner wrote:
> > On Wed, 2 Oct 2019 17:50:41 +0000
> > Shahaf Shuler wrote:
> >
> >> Wednesday, October 2, 2019 3:59 PM, Flavio Leitner:
> >>> Obrembski MichalX ; Stokes Ian
> >>>
> >>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear
> >>> mbufs
> >>>
> >>> Hi Shahaf,
> >>>
> >>> Thanks for looking
> >>> into this, see my inline comments.
> >>>
> >>> On Wed, 2 Oct 2019 09:00:11 +0000
> >>> Shahaf Shuler wrote:
> >>>
> >>>> Wednesday, October 2, 2019 11:05 AM, David Marchand:
> >>>>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large
> >>>>> linear mbufs
> >>>>>
> >>>>> Hello Shahaf,
> >>>>>
> >>>>> On Wed, Oct 2, 2019 at 6:46 AM Shahaf Shuler
> >>>>> wrote:
> >>>>>>
> >>
> >> [...]
> >>
> >>>>> I am missing some piece here.
> >>>>> Which pool would the PMD take those external buffers from?
> >>>>
> >>>> The mbuf is always taken from the single mempool associated w/
> >>>> the rxq. The buffer for the mbuf may be allocated (in case virtio
> >>>> payload is bigger than current mbuf size) from DPDK hugepages or
> >>>> any other system memory and be attached to the mbuf.
> >>>>
> >>>> You can see example implementation of it in mlx5 PMD (checkout
> >>>> rte_pktmbuf_attach_extbuf call)
> >>>
> >>> Thanks, I wasn't aware of external buffers.
> >>>
> >>> I see that attaching external buffers of the correct size would be
> >>> more efficient in terms of saving memory/avoiding sparsing.
> >>>
> >>> However, we still need to be prepared to the worse case scenario
> >>> (all packets 64K), so that doesn't help with the total memory
> >>> required.
> >>
> >> Am not sure why.
> >> The allocation can be per demand. That is - only when you
> >> encounter a large buffer.
> >>
> >> Having buffer allocated in advance will benefit only from removing
> >> the cost of the rte_*malloc. However on such big buffers, and
> >> further more w/ device offloads like TSO, am not sure that is an
> >> issue.
> >
> > Now I see what you're saying. I was thinking we had to reserve the
> > memory before, like mempool does, then get the buffers as needed.
> >
> > OK, I can give a try with rte_*malloc and see how it goes.
>
> This way we actually could have a nice API. For example, by
> introducing some new flag RTE_VHOST_USER_NO_CHAINED_MBUFS (there
> might be better name) which could be passed to driver_register().
> On receive, depending on this flag, function will create chained
> mbufs or allocate new contiguous memory chunk and attach it as
> an external buffer if the data could not be stored in a single
> mbuf from the registered memory pool.
>
> Supporting external memory in mbufs will require some additional
> work from the OVS side (e.g. better work with ol_flags), but
> we'll have to do it anyway for upgrade to DPDK 19.11.

Agreed. It looks like rte_malloc is fast enough after all. I have a PoC
running iperf3 from a VM to another bare-metal host over a vhost-user
client port with TSO enabled:

[...]
[ 5] 140.00-141.00 sec  4.60 GBytes  39.5 Gbits/sec    0   1.26 MBytes
[ 5] 141.00-142.00 sec  4.65 GBytes  39.9 Gbits/sec    0   1.26 MBytes
[ 5] 142.00-143.00 sec  4.65 GBytes  40.0 Gbits/sec    0   1.26 MBytes
[ 5] 143.00-144.00 sec  4.65 GBytes  39.9 Gbits/sec    9   1.04 MBytes
[ 5] 144.00-145.00 sec  4.59 GBytes  39.4 Gbits/sec    0   1.16 MBytes
[ 5] 145.00-146.00 sec  4.58 GBytes  39.3 Gbits/sec    0   1.26 MBytes
[ 5] 146.00-147.00 sec  4.48 GBytes  38.5 Gbits/sec  700    973 KBytes
[...]

(The physical link is 40 Gbps.)

I will clean this up, test more, and post the patches soon.

Thanks!
fbl
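
For illustration, a minimal sketch of the on-demand allocate-and-attach
path discussed in the thread above. The helper alloc_large_linear_mbuf()
and the callback ext_buf_free_cb() are hypothetical names used only for
this sketch; rte_pktmbuf_attach_extbuf(), rte_pktmbuf_ext_shinfo_init_helper(),
rte_malloc_virt2iova() and rte_pktmbuf_reset_headroom() are the existing
mbuf/malloc APIs.

#include <stdint.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Hypothetical callback: frees the rte_malloc'd area once the last
 * mbuf referencing it is released. */
static void
ext_buf_free_cb(void *addr, void *opaque __rte_unused)
{
	rte_free(addr);
}

/*
 * Sketch: take an mbuf from the rxq mempool and, when the virtio payload
 * does not fit in its own data room, attach an rte_malloc'd area to it as
 * an external buffer so the packet stays linear.
 */
static struct rte_mbuf *
alloc_large_linear_mbuf(struct rte_mempool *mp, uint32_t pkt_len)
{
	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
	struct rte_mbuf_ext_shared_info *shinfo;
	uint32_t total_len;
	uint16_t buf_len;
	void *buf;

	if (m == NULL)
		return NULL;

	/* Room for headroom, payload and the shared info the helper
	 * places at the tail of the buffer. */
	total_len = RTE_PKTMBUF_HEADROOM + pkt_len + sizeof(*shinfo);
	if (total_len > UINT16_MAX)
		goto fail;
	buf_len = total_len;

	buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
	if (buf == NULL)
		goto fail;

	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    ext_buf_free_cb, buf);
	if (shinfo == NULL) {
		rte_free(buf);
		goto fail;
	}

	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	rte_pktmbuf_reset_headroom(m);
	return m;

fail:
	rte_pktmbuf_free(m);
	return NULL;
}

Note that the shinfo helper carves the shared info out of the tail of the
allocated area, so the buf_len it hands back (and which is then passed to
the attach call) is slightly smaller than the rte_malloc'd size.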
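
Similarly, a sketch of how the flag proposed above might be passed to the
vhost driver registration. RTE_VHOST_USER_NO_CHAINED_MBUFS is only a
proposal in this thread, so the macro and its bit value below are
placeholders, not an existing rte_vhost.h definition; rte_vhost_driver_register()
and RTE_VHOST_USER_CLIENT are the existing API.

#include <rte_vhost.h>

/* Placeholder only: the proposed flag is not part of rte_vhost.h. */
#define RTE_VHOST_USER_NO_CHAINED_MBUFS (1ULL << 8)

static int
register_vhost_socket(const char *path)
{
	/* Client mode plus the proposed behavior: large packets would be
	 * received into a contiguous external buffer instead of a chain
	 * of mbufs from the registered mempool. */
	uint64_t flags = RTE_VHOST_USER_CLIENT |
			 RTE_VHOST_USER_NO_CHAINED_MBUFS;

	return rte_vhost_driver_register(path, flags);
}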