Date: Wed, 2 Oct 2019 15:15:28 -0300
From: Flavio Leitner
To: Shahaf Shuler
Cc: David Marchand, dev@dpdk.org, Maxime Coquelin, Tiwei Bie,
 Zhihong Wang, Obrembski MichalX, Stokes Ian
Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear mbufs
Message-ID: <20191002151528.0f285b8a@p50.lan>
References: <20191001221935.12140-1-fbl@sysclose.org>
 <20191002095831.5927af93@p50.lan>

On Wed, 2 Oct 2019 17:50:41 +0000
Shahaf Shuler wrote:

> Wednesday, October 2, 2019 3:59 PM, Flavio Leitner:
> > Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear
> > mbufs
> >
> > Hi Shahaf,
> >
> > Thanks for looking into this, see my inline comments.
> >
> > On Wed, 2 Oct 2019 09:00:11 +0000
> > Shahaf Shuler wrote:
> >
> > > Wednesday, October 2, 2019 11:05 AM, David Marchand:
> > > > Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large
> > > > linear mbufs
> > > >
> > > > Hello Shahaf,
> > > >
> > > > On Wed, Oct 2, 2019 at 6:46 AM Shahaf Shuler wrote:
> > > >
> > > [...]
> > > >
> > > > I am missing some piece here.
> > > > Which pool would the PMD take those external buffers from?
> > >
> > > The mbuf is always taken from the single mempool associated with
> > > the rxq. The buffer for the mbuf may be allocated (in case the
> > > virtio payload is bigger than the current mbuf size) from DPDK
> > > hugepages or any other system memory and be attached to the mbuf.
> > >
> > > You can see an example implementation of it in the mlx5 PMD
> > > (check out the rte_pktmbuf_attach_extbuf call).
> >
> > Thanks, I wasn't aware of external buffers.
> >
> > I see that attaching external buffers of the correct size would be
> > more efficient in terms of saving memory/avoiding sparse usage.
> >
> > However, we still need to be prepared for the worst-case scenario
> > (all packets 64K), so that doesn't help with the total memory
> > required.
>
> I am not sure why.
> The allocation can be on demand. That is - only when you encounter a
> large buffer.
>
> Having buffers allocated in advance will only save the cost of the
> rte_*malloc. However, on such big buffers, and furthermore with
> device offloads like TSO, I am not sure that is an issue.

Now I see what you're saying. I was thinking we had to reserve the
memory beforehand, like a mempool does, and then get the buffers as
needed. OK, I can give it a try with rte_*malloc and see how it goes
(a rough sketch of the idea is below).

> > The current patch pushes the decision to the application, which
> > knows the workload better. If more memory is available, it can
> > optionally use large buffers, otherwise it just doesn't pass that
> > option. Or it can even decide whether to share the same 64K mempool
> > between multiple vhost ports or use one mempool per port.
> >
> > Perhaps I missed something, but managing memory with a mempool
> > still requires us to have buffers of 64K regardless of whether the
> > data consumes less space. Otherwise the application or the PMD will
> > have to manage memory itself.
> >
> > If we let the PMD manage the memory, what happens if a port/queue
> > is closed and one or more buffers are still in use (switching)? I
> > don't see how to solve this cleanly.
>
> Closing of the dev should return EBUSY till all buffers are free.
> What is the use case of closing a port while still having packets
> pending on another port of the switch? And why can't we wait for
> them to complete transmission?

The vswitch gets the request from outside and the assumption is that
the command will succeed. AFAIK, there is no retry mechanism.

Thanks Shahaf!
fbl
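
A rough sketch of the on-demand external buffer idea discussed above,
using the generic rte_mbuf external buffer helpers
(rte_pktmbuf_ext_shinfo_init_helper / rte_pktmbuf_attach_extbuf) and
rte_malloc. The function names and the exact sizing here are only
illustrative assumptions, not the actual patch:

#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Free callback invoked when the last mbuf referencing the external
 * buffer is freed; it releases the rte_malloc'ed area. */
static void
extbuf_free_cb(void *addr __rte_unused, void *opaque)
{
        rte_free(opaque);
}

/* Take an mbuf from the rxq mempool and, only when the virtio payload
 * does not fit in its regular data room, allocate an external buffer
 * of the size actually needed and attach it to the mbuf. */
static struct rte_mbuf *
pktmbuf_alloc_for_size(struct rte_mempool *mp, uint32_t pkt_len)
{
        struct rte_mbuf_ext_shared_info *shinfo;
        struct rte_mbuf *m;
        uint32_t total_len;
        uint16_t buf_len;
        void *buf;

        m = rte_pktmbuf_alloc(mp);
        if (m == NULL)
                return NULL;

        /* Small packets keep using the mempool data room as today. */
        if (pkt_len <= rte_pktmbuf_tailroom(m))
                return m;

        /* Room for headroom, payload and the shared info that the
         * helper places at the tail of the buffer. */
        total_len = RTE_PKTMBUF_HEADROOM + pkt_len + sizeof(*shinfo);
        total_len = RTE_ALIGN_CEIL(total_len, RTE_CACHE_LINE_SIZE);
        if (total_len > UINT16_MAX)
                goto fail;

        buf_len = total_len;
        buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
        if (buf == NULL)
                goto fail;

        shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
                                                    extbuf_free_cb, buf);
        if (shinfo == NULL) {
                rte_free(buf);
                goto fail;
        }

        rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
                                  buf_len, shinfo);
        rte_pktmbuf_reset_headroom(m);
        return m;

fail:
        rte_pktmbuf_free(m);
        return NULL;
}

Since the shared info carries its own reference count, the external
buffer is released by whichever core frees the last mbuf referencing
it, which is also relevant to the port-close/EBUSY question above:
in-flight buffers do not depend on the originating port staying open.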