From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 12 Apr 2018 11:58:21 -0700
From: Yongseok Koh
To: "Ananyev, Konstantin"
Cc: Olivier Matz, "Lu, Wenzhuo", "Wu, Jingjing", Adrien Mazarguil,
 Nélio Laranjeiro, "dev@dpdk.org"
Message-ID: <20180412185710.GA33800@yongseok-MBP.local>
References: <20180402185008.13073-2-yskoh@mellanox.com>
 <20180403082615.etnr33cuyey7i3u3@platinum>
 <20180404001205.GB1867@yongseok-MBP.local>
 <20180409160434.kmw4iyztemrkzmtc@platinum>
 <20180410015902.GA20627@yongseok-MBP.local>
 <2601191342CEEE43887BDE71AB977258AE91344A@IRSMSX102.ger.corp.intel.com>
 <20180411053302.GA26252@yongseok-MBP.local>
 <2601191342CEEE43887BDE71AB977258AE913944@IRSMSX102.ger.corp.intel.com>
 <20180411170810.GA27791@yongseok-MBP.local>
 <2601191342CEEE43887BDE71AB977258AE914692@IRSMSX102.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB977258AE914692@IRSMSX102.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/6] mbuf: add buffer offset field for flexible indirection
List-Id: DPDK patches and discussions

On Thu, Apr 12, 2018 at 04:34:56PM +0000, Ananyev, Konstantin wrote:
> > > > > > On Mon, Apr 09, 2018 at 06:04:34PM +0200, Olivier Matz wrote:
> > > > > > > Hi Yongseok,
> > > > > > >
> > > > > > > On Tue, Apr 03, 2018 at 05:12:06PM -0700, Yongseok Koh wrote:
> > > > > > > > On Tue, Apr 03, 2018 at 10:26:15AM +0200, Olivier Matz wrote:
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > On Mon, Apr 02, 2018 at 11:50:03AM -0700, Yongseok Koh wrote:
> > > > > > > > > > When attaching a mbuf, an indirect mbuf has to point to the start of the
> > > > > > > > > > buffer of the direct mbuf. By adding a buf_off field to rte_mbuf, this
> > > > > > > > > > becomes more flexible. An indirect mbuf can point to any part of the
> > > > > > > > > > direct mbuf by calling rte_pktmbuf_attach_at().
> > > > > > > > > >
> > > > > > > > > > Possible use-cases could be:
> > > > > > > > > > - If a packet has multiple layers of encapsulation, multiple indirect
> > > > > > > > > >   buffers can reference different layers of the encapsulated packet.
> > > > > > > > > > - A large direct mbuf can even contain multiple packets in series and
> > > > > > > > > >   each packet can be referenced by multiple mbuf indirections.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Yongseok Koh
> > > > > > > > >
> > > > > > > > > I think the current API is already able to do what you want.
> > > > > > > > >
> > > > > > > > > 1/ Here is a mbuf m with its data
> > > > > > > > >
> > > > > > > > >            off
> > > > > > > > >            <-->
> > > > > > > > >                 len
> > > > > > > > >            +----+ <---------->
> > > > > > > > >            |    |
> > > > > > > > >          +-|----v----------------------+
> > > > > > > > >          | |    -----------------------|
> > > > > > > > >        m | buf  |    XXXXXXXXXXX      ||
> > > > > > > > >          |      -----------------------|
> > > > > > > > >          +-----------------------------+
> > > > > > > > >
> > > > > > > > > 2/ clone m:
> > > > > > > > >
> > > > > > > > >   c = rte_pktmbuf_alloc(pool);
> > > > > > > > >   rte_pktmbuf_attach(c, m);
> > > > > > > > >
> > > > > > > > > Note that c has its own offset and length fields.
> > > > > > > > >
> > > > > > > > >            off
> > > > > > > > >            <-->
> > > > > > > > >                 len
> > > > > > > > >            +----+ <---------->
> > > > > > > > >            |    |
> > > > > > > > >          +-|----v----------------------+
> > > > > > > > >          | |    -----------------------|
> > > > > > > > >        m | buf  |    XXXXXXXXXXX      ||
> > > > > > > > >          |      -----------------------|
> > > > > > > > >          +------^----------------------+
> > > > > > > > >                 |
> > > > > > > > >            +----+
> > > > > > > > >   indirect |
> > > > > > > > >          +-|---------------------------+
> > > > > > > > >          | |    -----------------------|
> > > > > > > > >        c | buf  |                     ||
> > > > > > > > >          |      -----------------------|
> > > > > > > > >          +-----------------------------+
> > > > > > > > >
> > > > > > > > >            off       len
> > > > > > > > >            <--><---------->
> > > > > > > > >
> > > > > > > > > 3/ remove some data from c without changing m
> > > > > > > > >
> > > > > > > > >   rte_pktmbuf_adj(c, 10)   // at head
> > > > > > > > >   rte_pktmbuf_trim(c, 10)  // at tail
> > > > > > > > >
> > > > > > > > > Please let me know if it fits your needs.
> > > > > > > >
> > > > > > > > No, it doesn't.
> > > > > > > >
> > > > > > > > Trimming head and tail with the current APIs removes data and makes the space
> > > > > > > > available. Adjusting the packet head means giving more headroom, not shifting
> > > > > > > > the buffer itself. If m has two indirect mbufs (c1 and c2) and those point to
> > > > > > > > different offsets in m,
> > > > > > > >
> > > > > > > >   rte_pktmbuf_adj(c1, 10);
> > > > > > > >   rte_pktmbuf_adj(c2, 20);
> > > > > > > >
> > > > > > > > then the owner of c2 regards the first (off+20)B as available headroom. If it
> > > > > > > > wants to attach an outer header, it will overwrite the headroom even though
> > > > > > > > the owner of c1 is still accessing it. Instead, another mbuf (h1) for the
> > > > > > > > outer header should be linked by h1->next = c2.
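To make the sequence above concrete, here is a minimal sketch using only the
existing mbuf API (pool setup, error handling and the 10/20-byte offsets are
made up for illustration):

	#include <rte_mbuf.h>

	/* Two clones of the same direct mbuf, shifted by different amounts. */
	static void
	clone_at_offsets(struct rte_mempool *pool, struct rte_mbuf *m)
	{
		struct rte_mbuf *c1 = rte_pktmbuf_alloc(pool);
		struct rte_mbuf *c2 = rte_pktmbuf_alloc(pool);

		rte_pktmbuf_attach(c1, m);	/* c1 references m's buffer */
		rte_pktmbuf_attach(c2, m);	/* so does c2 */

		rte_pktmbuf_adj(c1, 10);	/* c1's data starts at off+10 */
		rte_pktmbuf_adj(c2, 20);	/* c2's data starts at off+20 */

		/* Anything c2 later prepends into its "headroom" (e.g. with
		 * rte_pktmbuf_prepend()) lands in bytes that c1 still reads,
		 * which is exactly the overlap problem described above. */
	}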
> > > > > > >
> > > > > > > Yes, after these operations c1, c2 and m should become read-only. So, to
> > > > > > > prepend headers, another mbuf has to be inserted before, as you suggest. It
> > > > > > > is possible to wrap this in a function rte_pktmbuf_clone_area(m, offset,
> > > > > > > length) that will:
> > > > > > > - alloc and attach an indirect mbuf for each segment of m that is
> > > > > > >   in the range [offset : length+offset].
> > > > > > > - prepend an empty and writable mbuf for the headers
> > > > > > >
> > > > > > > > If c1 and c2 are attached with shifting the buffer address by adjusting
> > > > > > > > buf_off, which actually shrinks the headroom, this case can be properly
> > > > > > > > handled.
> > > > > > >
> > > > > > > What do you mean by properly handled?
> > > > > > >
> > > > > > > Yes, prepending data or adding data in the indirect mbuf won't overwrite
> > > > > > > the direct mbuf. But prepending data or adding data in the direct mbuf m
> > > > > > > won't be protected.
> > > > > > >
> > > > > > > From an application point of view, indirect mbufs, or direct mbufs that
> > > > > > > have refcnt != 1, should both be considered read-only because they may
> > > > > > > share their data. How can an application know if the data is shared or
> > > > > > > not?
> > > > > > >
> > > > > > > Maybe we need a flag to differentiate mbufs that are read-only
> > > > > > > (something like SHARED_DATA, or simply READONLY). In your case, if my
> > > > > > > understanding is correct, you want to have indirect mbufs with RW data.
> > > > > >
> > > > > > Agree that an indirect mbuf must be treated as read-only. Then the current
> > > > > > code is enough to handle that use-case.
> > > > > >
> > > > > > > > And another use-case (this is my actual use-case) is to make a large mbuf
> > > > > > > > hold multiple packets in series. AFAIK, this will also be helpful for some
> > > > > > > > FPGA NICs because they transfer multiple packets into a single large
> > > > > > > > buffer to reduce PCIe overhead for small-packet traffic, like the
> > > > > > > > Multi-Packet Rx of mlx5 does. Otherwise, packets have to be memcpy'd to
> > > > > > > > regular mbufs one by one instead of using indirect referencing.
> > > > >
> > > > > But just to make HW RX multiple packets into one mbuf,
> > > > > data_off inside the indirect mbuf should be enough, correct?
> > > >
> > > > Right. The current max buffer length of an mbuf is 64kB (16 bits) but it is
> > > > enough for mlx5 to reach 100Gbps with 64B traffic (149Mpps). I made mlx5 HW
> > > > put 16 packets in a buffer, so it needs a ~32kB buffer. Having more bits in
> > > > the length fields would be better, but 16 bits is good enough to overcome the
> > > > PCIe Gen3 bottleneck in order to saturate the network link.
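As a side note on the multi-packet Rx case above, a rough sketch of how a PMD
could hand out one large HW-filled buffer as several packets with the existing
attach plus data_off, assuming fixed-size packet slots; the helper name and
slot layout are made up and error handling is omitted:

	#include <rte_mbuf.h>

	static int
	slice_rx_buf(struct rte_mempool *pool, struct rte_mbuf *big,
		     struct rte_mbuf **pkts, uint16_t nb_pkts, uint16_t slot_len)
	{
		uint16_t i;

		for (i = 0; i < nb_pkts; i++) {
			pkts[i] = rte_pktmbuf_alloc(pool);
			if (pkts[i] == NULL)
				return -1;		/* cleanup omitted */
			rte_pktmbuf_attach(pkts[i], big);
			/* point each clone at its own slot in big's buffer */
			pkts[i]->data_off = big->data_off + i * slot_len;
			pkts[i]->data_len = slot_len;
			pkts[i]->pkt_len = slot_len;
		}
		/* each attach bumped big's refcnt; drop the PMD's own reference,
		 * the buffer is recycled once all clones are freed */
		rte_pktmbuf_free(big);
		return 0;
	}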
> > >
> > > There were a few complaints that the 64KB max is a limitation for some
> > > use-cases. I am not against increasing it, but I don't think we have free
> > > space on the first cache-line for that without another big rework of the mbuf
> > > layout, considering that we would need to increase the size of buf_len,
> > > data_off, data_len, and probably priv_size too.
> > >
> > > > > As I understand, what you'd like to achieve with this new field is the
> > > > > ability to manipulate packet boundaries after RX, probably at an upper
> > > > > layer. As Olivier pointed out above, that doesn't sound like a safe
> > > > > approach - you have multiple indirect mbufs trying to modify the same
> > > > > direct buffer.
> > > >
> > > > I agree that there's an implication that an indirect mbuf, or an mbuf having
> > > > refcnt > 1, is read-only. What that means is that all the entities which own
> > > > such mbufs have to be aware of that and keep to the principle, as DPDK can't
> > > > enforce the rule and there can't be such a sanity check. In this sense, HW
> > > > doesn't violate it because the direct mbuf is injected to HW before
> > > > indirection. When packets are written by HW, the PMD attaches indirect mbufs
> > > > to the direct mbuf and delivers those to the application layer while freeing
> > > > the original direct mbuf (decrementing its refcnt by 1). So, HW doesn't touch
> > > > the direct buffer once it reaches the upper layer.
> > >
> > > Yes, I understand that. But as I can see you introduced functions to adjust
> > > head and tail, which implies that it should be possible for some entity
> > > (upper layer?) to manipulate these indirect mbufs. And we don't know how
> > > exactly it will be done.
> >
> > That's a valid concern. I can make it private by merging it into the
> > _attach_to() func, or I can just add a comment in the API doc. However, if
> > users are aware that an mbuf is read-only and we expect them to keep it intact
> > by their own judgement, they would/should not use those APIs. We can't stop
> > them modifying the content or the buffer itself anyway. Will add more comments
> > from this discussion regarding read-only mode.
>
> Ok, so these functions are intended to be used only at PMD level?
> But in that case do you need them at all?
> Isn't it possible to implement the same thing with just data_off?
> I mean, your PMD knows in advance what the buf_len of the mbuf is, and at
> startup time it can decide how it is going to slice it into multiple packets.
> So each offset is known in advance and you don't need to worry that you'll
> overwrite a neighbor packet's data.

Since Olivier's last comment, I've been thinking about the approach all over
again. It looks like I'm trapped in self-contradiction. The reason why I didn't
want to use data_off was to provide valid headroom for each Rx packet and let
users freely write the headroom. But, given that an indirect mbuf should be
considered read-only, this isn't the right approach.

Instead of slicing a buffer with mbuf indirection and manipulating boundaries,
the idea of external data (as Olivier suggested) would fit better. Even though
it is more complex, it is doable. I summarized the ideas yesterday and will
come up with a new patch soon. Briefly, I think reserved bit 61 of ol_flags can
be used to indicate an externally attached mbuf. The following is my initial
thought.

#define EXT_ATTACHED_MBUF (1ULL << 61)

struct rte_pktmbuf_ext_shared_info {
	refcnt;
	*free_cb();
	*opaque /* arg for free_cb() */
}

rte_pktmbuf_get_ext_shinfo() {
	/* Put shared info at the end of external buffer */
	return (struct rte_pktmbuf_ext_shared_info *)(m->buf_addr + m->buf_len);
}

rte_pktmbuf_attach_ext_buf(m, buf_addr, buf_len, free_cb, opaque) {
	struct rte_pktmbuf_ext_shared_info *shinfo;

	m->buf_addr = buf_addr;
	m->buf_iova = rte_mempool_virt2iova(buf_addr);
	/* Have to add some calculation for alignment */
	m->buf_len = buf_len - sizeof(*shinfo);
	shinfo = m->buf_addr + m->buf_len;
	...
	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);

	m->ol_flags |= EXT_ATTACHED_MBUF;
	atomic set shinfo->refcnt = 1;
	shinfo->free_cb = free_cb;
	shinfo->opaque = opaque;
	...
}

rte_pktmbuf_detach_ext_buf(m)

#define RTE_MBUF_EXT(mb) ((mb)->ol_flags & EXT_ATTACHED_MBUF)

In rte_pktmbuf_prefree_seg(),

	if (RTE_MBUF_INDIRECT(m))
		rte_pktmbuf_detach(m);
	else if (RTE_MBUF_EXT(m))
		rte_pktmbuf_detach_ext_buf(m);

And in rte_pktmbuf_attach(), if the mbuf being attached to is externally
attached, then just increase the refcnt in shinfo so that multiple mbufs can
refer to the same external buffer.

Please feel free to share any concern/idea.
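For what it's worth, here is the attach helper above written as compilable C.
Everything in it is still only this proposal, not existing API; the free_cb
signature is a guess, alignment of shinfo and the detach/free path are left
out, and the external buffer is assumed to come from a mempool so that
rte_mempool_virt2iova() applies:

	#include <rte_mbuf.h>
	#include <rte_atomic.h>
	#include <rte_mempool.h>

	#define EXT_ATTACHED_MBUF (1ULL << 61)

	struct rte_pktmbuf_ext_shared_info {
		rte_atomic16_t refcnt;
		void (*free_cb)(void *addr, void *opaque);
		void *opaque;			/* arg for free_cb() */
	};

	static inline void
	rte_pktmbuf_attach_ext_buf(struct rte_mbuf *m, void *buf_addr,
				   uint16_t buf_len,
				   void (*free_cb)(void *addr, void *opaque),
				   void *opaque)
	{
		struct rte_pktmbuf_ext_shared_info *shinfo;

		m->buf_addr = buf_addr;
		m->buf_iova = rte_mempool_virt2iova(buf_addr);
		/* shared info sits in the tail of the external buffer */
		m->buf_len = buf_len - sizeof(*shinfo);
		shinfo = (struct rte_pktmbuf_ext_shared_info *)
			 ((char *)buf_addr + m->buf_len);

		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
		m->data_len = 0;
		m->ol_flags |= EXT_ATTACHED_MBUF;

		rte_atomic16_set(&shinfo->refcnt, 1);
		shinfo->free_cb = free_cb;
		shinfo->opaque = opaque;
	}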
> > > > The direct buffer will be freed and become available for reuse when all the
> > > > attached indirect mbufs are freed.
> > > >
> > > > > Though if you really need to do that, couldn't it be achieved by updating
> > > > > the buf_len and priv_size fields of the indirect mbufs, straight after
> > > > > attach()?
> > > >
> > > > Good point.
> > > > Actually that was my draft (Mellanox internal) version of this patch :-) But
> > > > I had to consider a case where priv_size is really given by the user. Even
> > > > though it is less likely, if the original priv_size is quite big, it can't
> > > > cover the entire buf_len. For this, I would have had to increase priv_size to
> > > > 32 bits, but adding another 16-bit field (buf_off) looked more plausible.
> > >
> > > As I remember, we can't have mbufs bigger than 64K,
> > > so priv_size + buf_len should always be less than 64K, correct?
> >
> > Can you let me know where I can find that constraint? I checked
> > rte_pktmbuf_pool_create() and rte_pktmbuf_init() again to make sure I'm not
> > mistaken, but there's no such limitation.
> >
> >	elt_size = sizeof(struct rte_mbuf) + (unsigned)priv_size +
> >		(unsigned)data_room_size;
> >
> Ok, I scanned through librte_mbuf and didn't find any limitations.
> Seems like a false impression from my side.
> Anyway, it seems like a corner case to have priv_size + buf_len > 64KB.
> Do you really need to support it?

If a user must have a 64kB buffer (which is valid, no violation) and priv_size
is just a few bytes, does the library have to force the user to sacrifice a few
bytes of the buffer for priv_size? Do you think that's a corner case?

Still, using priv_size doesn't seem to be a good idea.

Yongseok

> > The max of data_room_size is 64kB, and so is priv_size. m->buf_addr starts at
> > 'm + sizeof(*m) + priv_size' and m->buf_len can't be larger than UINT16_MAX.
> > So, priv_size couldn't be used for this purpose.
> >
> > Yongseok
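Just to spell out the layout in question (as rte_pktmbuf_init() sets it up); a
tiny illustrative helper, where the function itself is made up and only the
address arithmetic matters:

	#include <rte_mbuf.h>

	/* An mbuf element is laid out as:
	 *   [ struct rte_mbuf | priv_size bytes | data room (buf_len bytes) ]
	 * buf_len is a uint16_t, which is where the 64kB cap on the data room
	 * comes from, while the element as a whole can be larger.
	 */
	static char *
	data_room_start(struct rte_mempool *mp, struct rte_mbuf *m)
	{
		return (char *)m + sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
	}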
> > > > > > > >
> > > > > > > > Does this make sense?
> > > > > > >
> > > > > > > I understand the need.
> > > > > > >
> > > > > > > Another option would be to make mbuf->buffer point to an external
> > > > > > > buffer (not inside the direct mbuf). This would require adding a
> > > > > > > mbuf->free_cb. See "Mbuf with external data buffer" (page 19) in [1]
> > > > > > > for a quick overview.
> > > > > > >
> > > > > > > [1] https://dpdksummit.com/Archive/pdf/2016Userspace/Day01-Session05-OlivierMatz-Userspace2016.pdf
> > > > > > >
> > > > > > > The advantage is that it does not require the large data to be inside
> > > > > > > an mbuf (requiring an mbuf structure before the buffer, and requiring
> > > > > > > it to be allocated from a mempool). On the other hand, it is maybe more
> > > > > > > complex to implement compared to your solution.
> > > > > >
> > > > > > I knew that you had presented those slides and, frankly, I had considered
> > > > > > that option at first. But even with that option, metadata to store the
> > > > > > refcnt has to be allocated and managed anyway. The kernel also keeps its
> > > > > > skb_shared_info at the end of the data segment. Even though it could have
> > > > > > a smaller metadata structure, I just wanted to make full use of the
> > > > > > existing framework because it is less complex, as you mentioned. Given
> > > > > > that you presented the idea of an external data buffer in 2016 and there
> > > > > > haven't been many follow-up discussions/activities so far, I thought the
> > > > > > demand wasn't so big yet, so I wanted to keep this patch simpler. I
> > > > > > personally think that we can take up the idea of external data segments
> > > > > > when more demand comes from users in the future, as it would be a huge
> > > > > > change and may break the current ABI/API. When the day comes, I'll gladly
> > > > > > participate in the discussions and write code for it if I can be helpful.
> > > > > >
> > > > > > Do you think this patch is okay for now?
> > > > > >
> > > > > > Thanks for your comments,
> > > > > > Yongseok