DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: wangyunjian <wangyunjian@huawei.com>, dev@dpdk.org
Cc: keith.wiles@intel.com, ophirmu@mellanox.com,
	jerry.lilijun@huawei.com, xudingke@huawei.com, stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
Date: Wed, 5 Aug 2020 17:35:49 +0100	[thread overview]
Message-ID: <a5329cff-d7a7-31f4-0fde-e258dde072d4@intel.com> (raw)
In-Reply-To: <40a0e68ed41b05fba8cbe5f34e369a59a1c0c09c.1596022448.git.wangyunjian@huawei.com>

On 7/29/2020 12:35 PM, wangyunjian wrote:
> From: Yunjian Wang <wangyunjian@huawei.com>
> 
> When setting up Tx queues, a mempool is created for the 'gso_ctx'.
> This mempool is not freed when closing the tap device. If the tap
> device is freed and then created again with a different name, a new
> mempool is created, which may cause an OOM.
> 
> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
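
For reference, the counterpart on the close path is simply to release
that pool; a minimal sketch (not the patch's own hunk, which is not
quoted here), assuming the gso_ctx_mp pointer this patch adds to
struct pmd_internals:

#include <rte_mempool.h>
#include "rte_eth_tap.h"	/* struct pmd_internals, gso_ctx_mp field */

/*
 * Release the GSO mempool created in tap_gso_ctx_setup(), so that
 * re-creating the device under a different name cannot leak pools.
 * Sketch only; meant to be called from the device close path.
 */
static void
tap_gso_ctx_release(struct pmd_internals *pmd)
{
	rte_mempool_free(pmd->gso_ctx_mp);	/* no-op if NULL */
	pmd->gso_ctx_mp = NULL;
}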

<...>

> @@ -1317,26 +1320,31 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
>  {
>  	uint32_t gso_types;
>  	char pool_name[64];
> -
> -	/*
> -	 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE bytes
> -	 * size per mbuf use this pool for both direct and indirect mbufs
> -	 */
> -
> -	struct rte_mempool *mp;      /* Mempool for GSO packets */
> +	struct pmd_internals *pmd = dev->data->dev_private;
> +	int ret;
>  
>  	/* initialize GSO context */
>  	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
> -	snprintf(pool_name, sizeof(pool_name), "mp_%s", dev->device->name);
> -	mp = rte_mempool_lookup((const char *)pool_name);
> -	if (!mp) {
> -		mp = rte_pktmbuf_pool_create(pool_name, TAP_GSO_MBUFS_NUM,
> -			TAP_GSO_MBUF_CACHE_SIZE, 0,
> +	if (!pmd->gso_ctx_mp) {
> +		/*
> +		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
> +		 * bytes size per mbuf use this pool for both direct and
> +		 * indirect mbufs
> +		 */
> +		ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
> +				dev->device->name);
> +		if (ret < 0 || ret >= (int)sizeof(pool_name)) {
> +			TAP_LOG(ERR,
> +				"%s: failed to create mbuf pool "
> +				"name for device %s\n",
> +				pmd->name, dev->device->name);

Overall looks good. The only issue is that the error above doesn't say why it
failed; telling the user that the device name is too long may help them
overcome the error.
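
Something along the lines below would make the cause explicit; a rough,
untested sketch of that suggestion, reusing the existing TAP_LOG macro:

	ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
			dev->device->name);
	if (ret < 0 || ret >= (int)sizeof(pool_name)) {
		/* Say why it failed: the device name does not fit. */
		TAP_LOG(ERR,
			"%s: failed to create mbuf pool name for device %s, "
			"device name too long or output error, ret: %d\n",
			pmd->name, dev->device->name, ret);
		return -ENAMETOOLONG;
	}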



Thread overview: 12+ messages
2020-07-29 11:35 wangyunjian
2020-08-05 13:47 ` [dpdk-dev] [dpdk-stable] " Thomas Monjalon
2020-08-06 12:47   ` wangyunjian
2020-08-06 13:19     ` Thomas Monjalon
2020-08-28 12:51       ` wangyunjian
2020-09-01 10:57         ` Thomas Monjalon
2020-08-05 16:35 ` Ferruh Yigit [this message]
2020-08-06 12:45   ` [dpdk-dev] " wangyunjian
2020-08-06 13:04     ` Ferruh Yigit
2020-08-06 13:35       ` wangyunjian
2020-08-08  9:58 ` [dpdk-dev] [PATCH v2] " wangyunjian
2020-09-14 14:43   ` Ferruh Yigit
