From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ferruh Yigit
To: wangyunjian, "dev@dpdk.org"
Cc: "keith.wiles@intel.com", "ophirmu@mellanox.com", "Lilijun (Jerry)", xudingke, "stable@dpdk.org"
References: <40a0e68ed41b05fba8cbe5f34e369a59a1c0c09c.1596022448.git.wangyunjian@huawei.com> <34EFBCA9F01B0748BEB6B629CE643AE60D11126E@DGGEMM533-MBX.china.huawei.com>
Message-ID: <955bd4da-7549-f04a-4edb-6ae4534cb25f@intel.com>
Date: Thu, 6 Aug 2020 14:04:04 +0100
In-Reply-To: <34EFBCA9F01B0748BEB6B629CE643AE60D11126E@DGGEMM533-MBX.china.huawei.com>
Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing

On 8/6/2020 1:45 PM, wangyunjian wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
>> Sent: Thursday, August 6, 2020 12:36 AM
>> To: wangyunjian; dev@dpdk.org
>> Cc: keith.wiles@intel.com; ophirmu@mellanox.com; Lilijun (Jerry);
>> xudingke; stable@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
>>
>> On 7/29/2020 12:35 PM, wangyunjian wrote:
>>> From: Yunjian Wang
>>>
>>> When setting up the tx queues, a mempool is created for the 'gso_ctx'.
>>> This mempool is not freed when the tap device is closed. If the tap
>>> device is freed and then created again with a different name, a new
>>> mempool is created, which may eventually cause an OOM.
>>>
>>> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Yunjian Wang
>>
>> <...>
>>
>>> @@ -1317,26 +1320,31 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
>>>  {
>>>  	uint32_t gso_types;
>>>  	char pool_name[64];
>>> -
>>> -	/*
>>> -	 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE bytes
>>> -	 * size per mbuf use this pool for both direct and indirect mbufs
>>> -	 */
>>> -
>>> -	struct rte_mempool *mp; /* Mempool for GSO packets */
>>> +	struct pmd_internals *pmd = dev->data->dev_private;
>>> +	int ret;
>>>
>>>  	/* initialize GSO context */
>>>  	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
>>> -	snprintf(pool_name, sizeof(pool_name), "mp_%s", dev->device->name);
>>> -	mp = rte_mempool_lookup((const char *)pool_name);
>>> -	if (!mp) {
>>> -		mp = rte_pktmbuf_pool_create(pool_name, TAP_GSO_MBUFS_NUM,
>>> -			TAP_GSO_MBUF_CACHE_SIZE, 0,
>>> +	if (!pmd->gso_ctx_mp) {
>>> +		/*
>>> +		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
>>> +		 * bytes size per mbuf use this pool for both direct and
>>> +		 * indirect mbufs
>>> +		 */
>>> +		ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
>>> +				dev->device->name);
>>> +		if (ret < 0 || ret >= (int)sizeof(pool_name)) {
>>> +			TAP_LOG(ERR,
>>> +				"%s: failed to create mbuf pool "
>>> +				"name for device %s\n",
>>> +				pmd->name, dev->device->name);
>>
>> Overall looks good. Only the above error message does not say why it
>> failed; informing the user that the device name is too long may help
>> them overcome the error.
>
> I found that the return value of snprintf was not checked when modifying
> the code, so I fixed it. I think it may fail because the maximum device
> name length is RTE_DEV_NAME_MAX_LEN (64).

+1 to the check. My comment was about the log message, which says "failed
to create mbuf pool" but does not say that the failure is caused by a long
device name. If the user knows the reason for the failure, they can prevent
it by providing a shorter device name.

My suggestion is to update the error log message to include the reason for
the failure.

>
> Do I need to split this into two patches?

I think it is OK to have the change in this patch.

>
> Thanks,
> Yunjian
>
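
As an illustration of the suggested wording (a sketch only, reusing pool_name,
ret, pmd, dev and TAP_LOG from the quoted hunk; the exact message and error
handling stay up to the patch author), the check could report the cause
directly:

	ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
			dev->device->name);
	if (ret < 0 || ret >= (int)sizeof(pool_name)) {
		/*
		 * Name the actual cause: the device name does not fit in
		 * pool_name[64] once the "mp_" prefix is added (or snprintf
		 * itself failed).
		 */
		TAP_LOG(ERR,
			"%s: failed to create mbuf pool name for device %s: device name too long",
			pmd->name, dev->device->name);
		/* error handling unchanged from the patch */
	}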
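
For the release side that the subject line refers to ("free mempool when
closing"), a minimal sketch of the idea, assuming the gso_ctx_mp pointer this
patch adds to the PMD private data (the exact hunk is not quoted above);
rte_mempool_free() is a no-op on NULL, so the call is safe even when TSO was
never configured:

#include <rte_mempool.h>

/* Illustrative helper only, not the exact hunk from the patch. */
static void
gso_ctx_mp_release(struct rte_mempool **mp)
{
	rte_mempool_free(*mp);	/* no-op when *mp == NULL */
	*mp = NULL;		/* so a later queue setup recreates the pool */
}

It would be called from the close path, e.g. gso_ctx_mp_release(&pmd->gso_ctx_mp),
before the device private data is released.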