patches for DPDK stable branches
From: wangyunjian <wangyunjian@huawei.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "keith.wiles@intel.com" <keith.wiles@intel.com>,
	"ophirmu@mellanox.com" <ophirmu@mellanox.com>,
	"Lilijun (Jerry)" <jerry.lilijun@huawei.com>,
	xudingke <xudingke@huawei.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] net/tap: free mempool when closing
Date: Thu, 6 Aug 2020 13:35:35 +0000	[thread overview]
Message-ID: <34EFBCA9F01B0748BEB6B629CE643AE60D1112C4@DGGEMM533-MBX.china.huawei.com> (raw)
In-Reply-To: <955bd4da-7549-f04a-4edb-6ae4534cb25f@intel.com>

> -----Original Message-----
> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> Sent: Thursday, August 6, 2020 9:04 PM
> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org
> Cc: keith.wiles@intel.com; ophirmu@mellanox.com; Lilijun (Jerry)
> <jerry.lilijun@huawei.com>; xudingke <xudingke@huawei.com>;
> stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
> 
> On 8/6/2020 1:45 PM, wangyunjian wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit [mailto:ferruh.yigit@intel.com]
> >> Sent: Thursday, August 6, 2020 12:36 AM
> >> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org
> >> Cc: keith.wiles@intel.com; ophirmu@mellanox.com; Lilijun (Jerry)
> >> <jerry.lilijun@huawei.com>; xudingke <xudingke@huawei.com>;
> >> stable@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] net/tap: free mempool when closing
> >>
> >> On 7/29/2020 12:35 PM, wangyunjian wrote:
> >>> From: Yunjian Wang <wangyunjian@huawei.com>
> >>>
> >>> When setting up Tx queues, we create a mempool for the 'gso_ctx'.
> >>> The mempool is not freed when closing the tap device. If the tap
> >>> device is freed and recreated with a different name, a new mempool
> >>> is created. This may eventually cause an OOM.
> >>>
> >>> Fixes: 050316a88313 ("net/tap: support TSO (TCP Segment Offload)")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
> >>
> >> <...>
> >>
> >>> @@ -1317,26 +1320,31 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
> >>>  {
> >>>  	uint32_t gso_types;
> >>>  	char pool_name[64];
> >>> -
> >>> -	/*
> >>> -	 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE bytes
> >>> -	 * size per mbuf use this pool for both direct and indirect mbufs
> >>> -	 */
> >>> -
> >>> -	struct rte_mempool *mp;      /* Mempool for GSO packets */
> >>> +	struct pmd_internals *pmd = dev->data->dev_private;
> >>> +	int ret;
> >>>
> >>>  	/* initialize GSO context */
> >>>  	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
> >>> -	snprintf(pool_name, sizeof(pool_name), "mp_%s", dev->device->name);
> >>> -	mp = rte_mempool_lookup((const char *)pool_name);
> >>> -	if (!mp) {
> >>> -		mp = rte_pktmbuf_pool_create(pool_name, TAP_GSO_MBUFS_NUM,
> >>> -			TAP_GSO_MBUF_CACHE_SIZE, 0,
> >>> +	if (!pmd->gso_ctx_mp) {
> >>> +		/*
> >>> +		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
> >>> +		 * bytes size per mbuf use this pool for both direct and
> >>> +		 * indirect mbufs
> >>> +		 */
> >>> +		ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
> >>> +				dev->device->name);
> >>> +		if (ret < 0 || ret >= (int)sizeof(pool_name)) {
> >>> +			TAP_LOG(ERR,
> >>> +				"%s: failed to create mbuf pool "
> >>> +				"name for device %s\n",
> >>> +				pmd->name, dev->device->name);
> >>
> >> Overall looks good. Only the above error doesn't say why it failed;
> >> informing the user that the device name is too long may help them
> >> overcome the error.
> >
> > I found that the return value of snprintf was not checked when
> > modifying the code, so I fixed it.
> > I think it may fail, because the max device name length is
> > RTE_DEV_NAME_MAX_LEN (64).
> 
> +1 to the check.
> My comment was on the log message, which says "failed to create mbuf pool",
> but it doesn't say that it failed because of a long device name.
> If the user knows the reason for the failure, they can prevent it by
> providing a shorter device name.
> My suggestion is to update the error log message to include the reason for
> the failure.

Thanks for your suggestion, I will include it in the next version.
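
For reference, the updated check inside tap_gso_ctx_setup() could look
roughly like the sketch below. This is illustrative only and will be
settled in v2; the exact message wording and the -ENAMETOOLONG return
value are assumptions, not committed code. The truncation case is
reachable because the device name may be up to RTE_DEV_NAME_MAX_LEN (64)
bytes, so "mp_" plus the name can exceed the 64-byte pool_name buffer:

	ret = snprintf(pool_name, sizeof(pool_name), "mp_%s",
			dev->device->name);
	if (ret < 0 || ret >= (int)sizeof(pool_name)) {
		/* Tell the user why it failed: the device name is too
		 * long to build the mempool name in pool_name[64].
		 */
		TAP_LOG(ERR,
			"%s: failed to create mbuf pool name for device %s,"
			" device name too long or output error\n",
			pmd->name, dev->device->name);
		return -ENAMETOOLONG;
	}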

> 
> >
> > Do I need to split into two patches?
> 
> I think OK to have the change in this patch.

OK
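
For context, the mempool-release half of the patch is roughly as below.
This is a sketch only: it assumes the gso_ctx_mp member introduced in
the quoted diff and the usual 'internals' pointer
(dev->data->dev_private) available in tap_dev_close(); placement and
naming may differ in the final version:

	/* in tap_dev_close(): free the per-device GSO mempool so a
	 * re-created device does not leave the old pool behind
	 */
	if (internals->gso_ctx_mp) {
		rte_mempool_free(internals->gso_ctx_mp);
		internals->gso_ctx_mp = NULL;
	}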

> 
> >
> > Thanks,
> > Yunjian
> >



Thread overview: 11+ messages
2020-07-29 11:35 wangyunjian
2020-08-05 13:47 ` Thomas Monjalon
2020-08-06 12:47   ` wangyunjian
2020-08-06 13:19     ` Thomas Monjalon
2020-08-28 12:51       ` wangyunjian
2020-08-05 16:35 ` Ferruh Yigit
2020-08-06 12:45   ` wangyunjian
2020-08-06 13:04     ` Ferruh Yigit
2020-08-06 13:35       ` wangyunjian [this message]
2020-08-08  9:58 ` [dpdk-stable] [dpdk-dev] [PATCH v2] " wangyunjian
2020-09-14 14:43   ` Ferruh Yigit
