From: Shahaf Shuler <shahafs@mellanox.com>
To: Flavio Leitner
CC: David Marchand, "dev@dpdk.org", Maxime Coquelin, Tiwei Bie, Zhihong Wang, Obrembski MichalX, Stokes Ian
Date: Wed, 2 Oct 2019 17:50:41 +0000
Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear mbufs
In-Reply-To: <20191002095831.5927af93@p50.lan>
References: <20191001221935.12140-1-fbl@sysclose.org> <20191002095831.5927af93@p50.lan>
Wednesday, October 2, 2019 3:59 PM, Flavio Leitner:
> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear mbufs
>
> Hi Shahaf,
>
> Thanks for looking into this, see my inline comments.
>
> On Wed, 2 Oct 2019 09:00:11 +0000
> Shahaf Shuler wrote:
>
> > Wednesday, October 2, 2019 11:05 AM, David Marchand:
> > > Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear
> > > mbufs
> > >
> > > Hello Shahaf,
> > >
> > > On Wed, Oct 2, 2019 at 6:46 AM Shahaf Shuler wrote:
> > > > [...]
> > >
> > > I am missing some piece here.
> > > Which pool would the PMD take those external buffers from?
> >
> > The mbuf is always taken from the single mempool associated w/ the
> > rxq. The buffer for the mbuf may be allocated (in case the virtio
> > payload is bigger than the current mbuf size) from DPDK hugepages or
> > any other system memory and be attached to the mbuf.
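To illustrate the on-demand attach flow described above, here is a
minimal sketch. The helper names (attach_large_buf, ext_buf_free_cb)
are hypothetical; rte_pktmbuf_attach_extbuf,
rte_pktmbuf_ext_shinfo_init_helper, rte_malloc and
rte_malloc_virt2iova are the actual DPDK APIs (available since 18.05):

#include <errno.h>
#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Free callback invoked once the last mbuf referencing the external
 * buffer is freed; opaque carries the rte_malloc'ed pointer. */
static void
ext_buf_free_cb(void *addr __rte_unused, void *opaque)
{
	rte_free(opaque);
}

/* On-demand path: called only when the payload does not fit in the
 * mbuf's own data room. For simplicity, assumes len plus the shared
 * info still fits in uint16_t. */
static int
attach_large_buf(struct rte_mbuf *m, uint16_t len)
{
	struct rte_mbuf_ext_shared_info *shinfo;
	uint16_t buf_len = len + sizeof(*shinfo);
	void *buf;

	buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
	if (buf == NULL)
		return -ENOMEM;

	/* Reserve the shared info at the tail of the buffer and
	 * register the free callback. */
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    ext_buf_free_cb, buf);
	if (shinfo == NULL) {
		rte_free(buf);
		return -EINVAL;
	}

	/* The mbuf itself still comes from the rxq mempool; only its
	 * data buffer is replaced by the external one. */
	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	return 0;
}

Note that no extra mempool is needed: the mbuf metadata stays with the
rxq mempool and only the data buffer is swapped.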
> > You can see an example implementation of it in the mlx5 PMD (check
> > out the rte_pktmbuf_attach_extbuf call).
>
> Thanks, I wasn't aware of external buffers.
>
> I see that attaching external buffers of the correct size would be more
> efficient in terms of saving memory/avoiding sparseness.
>
> However, we still need to be prepared for the worst-case scenario (all
> packets 64K), so that doesn't help with the total memory required.

I am not sure why.
The allocation can be on demand, that is, only when you encounter a
large buffer.

Having buffers allocated in advance would only save the cost of the
rte_*malloc call. However, with such big buffers, and furthermore with
device offloads like TSO, I am not sure that cost is an issue.

> The current patch pushes the decision to the application, which knows
> the workload better. If more memory is available, it can optionally
> use large buffers, otherwise just don't pass that. Or even decide
> whether to share the same 64K mempool between multiple vhost ports or
> use one mempool per port.
>
> Perhaps I missed something, but managing memory with a mempool still
> requires us to have buffers of 64K regardless of whether the data
> consumes less space. Otherwise the application or the PMD will have to
> manage memory itself.
>
> If we let the PMD manage the memory, what happens if a port/queue is
> closed and one or more buffers are still in use (switching)? I don't
> see how to solve this cleanly.

Closing the device should return EBUSY until all buffers are free (see
the sketch at the end of this mail).
What is the use case for closing a port while packets are still pending
on another port of the switch? And why can we not wait for them to
complete transmission?

>
> fbl
>
> >
> > > If it is from an additional mempool passed to the vhost pmd, I can't
> > > see the difference with Flavio's proposal.
> > >
> > >
> > > > The pro of this approach is that you have full flexibility on the
> > > > memory allocation, and therefore a lower footprint.
> > > > The con is that OVS will need to know how to handle mbufs w/
> > > > external buffers (not too complex IMO).
> > >
> > >
> > > --
> > > David Marchand
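To make the EBUSY suggestion above concrete, here is a sketch under
the assumption that the PMD keeps a per-port count of external buffers
still in flight (incremented on attach, decremented in the extbuf free
callback); the names port_state and port_close are hypothetical:

#include <errno.h>
#include <rte_atomic.h>

/* Hypothetical per-port state tracked by the PMD. */
struct port_state {
	rte_atomic32_t ext_bufs_in_flight;
};

/* Close path: refuse to release resources while any external buffer
 * attached by this port is still owned by the application. */
static int
port_close(struct port_state *st)
{
	if (rte_atomic32_read(&st->ext_bufs_in_flight) > 0)
		return -EBUSY; /* caller retries once buffers are freed */

	/* ... release queues and other port resources ... */
	return 0;
}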