DPDK patches and discussions
From: Odi Assli <odia@nvidia.com>
To: Tal Shnaiderman <talshn@nvidia.com>,
	Suanming Mou <suanmingm@nvidia.com>,
	 Slava Ovsiienko <viacheslavo@nvidia.com>,
	Matan Azrad <matan@nvidia.com>
Cc: Raslan Darawsheh <rasland@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix indexed pools allocate on Windows
Date: Wed, 21 Jul 2021 08:42:08 +0000	[thread overview]
Message-ID: <PH0PR12MB5449B1B05BDBECA3DB396D2DABE39@PH0PR12MB5449.namprd12.prod.outlook.com> (raw)
In-Reply-To: <DM4PR12MB5389E4D9E802B08B4BF9A67EA4E39@DM4PR12MB5389.namprd12.prod.outlook.com>



> -----Original Message-----
> From: Tal Shnaiderman <talshn@nvidia.com>
> Sent: Wednesday, July 21, 2021 11:40 AM
> To: Suanming Mou <suanmingm@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Odi Assli
> <odia@nvidia.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org
> Subject: RE: [PATCH] net/mlx5: fix indexed pools allocate on Windows
> 
> > Subject: [PATCH] net/mlx5: fix indexed pools allocate on Windows
> >
> > Currently, the flow indexed pools are allocated per port, but the
> > allocation was missing in the Windows code.
> >
> > This commit fixes the issue of the Windows flow indexed pools not
> > being allocated.
> >
> > Fixes: b4edeaf3efd5 ("net/mlx5: replace flow list with indexed pool")
> >
> > Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
> > ---
> >  drivers/net/mlx5/windows/mlx5_os.c | 47 ++++++++++++++++++++++++++++++
> >  1 file changed, 47 insertions(+)
> >
> > diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> > index 5da362a9d5..a31fafc90d 100644
> > --- a/drivers/net/mlx5/windows/mlx5_os.c
> > +++ b/drivers/net/mlx5/windows/mlx5_os.c
> > @@ -35,6 +35,44 @@ static const char *MZ_MLX5_PMD_SHARED_DATA = "mlx5_pmd_shared_data";
> >  /* Spinlock for mlx5_shared_data allocation. */
> >  static rte_spinlock_t mlx5_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
> >
> > +/* rte flow indexed pool configuration. */
> > +static struct mlx5_indexed_pool_config icfg[] = {
> > +	{
> > +		.size = sizeof(struct rte_flow),
> > +		.trunk_size = 64,
> > +		.need_lock = 1,
> > +		.release_mem_en = 0,
> > +		.malloc = mlx5_malloc,
> > +		.free = mlx5_free,
> > +		.per_core_cache = 0,
> > +		.type = "ctl_flow_ipool",
> > +	},
> > +	{
> > +		.size = sizeof(struct rte_flow),
> > +		.trunk_size = 64,
> > +		.grow_trunk = 3,
> > +		.grow_shift = 2,
> > +		.need_lock = 1,
> > +		.release_mem_en = 0,
> > +		.malloc = mlx5_malloc,
> > +		.free = mlx5_free,
> > +		.per_core_cache = 1 << 14,
> > +		.type = "rte_flow_ipool",
> > +	},
> > +	{
> > +		.size = sizeof(struct rte_flow),
> > +		.trunk_size = 64,
> > +		.grow_trunk = 3,
> > +		.grow_shift = 2,
> > +		.need_lock = 1,
> > +		.release_mem_en = 0,
> > +		.malloc = mlx5_malloc,
> > +		.free = mlx5_free,
> > +		.per_core_cache = 0,
> > +		.type = "mcp_flow_ipool",
> > +	},
> > +};
> > +
> >  /**
> >   * Initialize shared data between primary and secondary process.
> >   *
> > @@ -317,6 +355,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >  	char name[RTE_ETH_NAME_MAX_LEN];
> >  	int own_domain_id = 0;
> >  	uint16_t port_id;
> > +	int i;
> >
> >  	/* Build device name. */
> >  	strlcpy(name, dpdk_dev->name, sizeof(name));
> > @@ -584,6 +623,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >  	mlx5_set_min_inline(spawn, config);
> >  	/* Store device configuration on private structure. */
> >  	priv->config = *config;
> > +	for (i = 0; i < MLX5_FLOW_TYPE_MAXI; i++) {
> > +		icfg[i].release_mem_en = !!config->reclaim_mode;
> > +		if (config->reclaim_mode)
> > +			icfg[i].per_core_cache = 0;
> > +		priv->flows[i] = mlx5_ipool_create(&icfg[i]);
> > +		if (!priv->flows[i])
> > +			goto error;
> > +	}
> >  	/* Create context for virtual machine VLAN workaround. */
> >  	priv->vmwa_context = NULL;
> >  	if (config->dv_flow_en) {
> > --
> > 2.25.1
> 
> Acked-by: Tal Shnaiderman <talshn@nvidia.com>
Tested-by: Odi Assli <odia@nvidia.com>
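
For context, the loop added in mlx5_dev_spawn() creates one indexed pool per flow type from the icfg[] table, turning release_mem_en on and the per-core cache off whenever reclaim mode is configured. A minimal sketch of how such a per-port pool is then exercised follows; it assumes the mlx5_ipool_zmalloc()/mlx5_ipool_get()/mlx5_ipool_free() helpers from drivers/net/mlx5/mlx5_utils.h and the MLX5_FLOW_TYPE_* indices from the PMD's internal headers, and flow_ipool_example() is a hypothetical name, not code from this patch.

/*
 * Illustration only (not part of the patch): allocating and releasing an
 * rte_flow entry from one of the per-port indexed pools created in
 * mlx5_dev_spawn(). Assumes the mlx5_ipool_*() helpers declared in
 * drivers/net/mlx5/mlx5_utils.h and the MLX5_FLOW_TYPE_* enum;
 * flow_ipool_example() is a made-up name.
 */
#include <errno.h>

#include "mlx5.h"
#include "mlx5_flow.h"
#include "mlx5_utils.h"

static int
flow_ipool_example(struct mlx5_priv *priv)
{
	struct rte_flow *flow;
	uint32_t idx = 0;

	/* Take a zeroed entry; idx becomes its handle inside the pool. */
	flow = mlx5_ipool_zmalloc(priv->flows[MLX5_FLOW_TYPE_GEN], &idx);
	if (flow == NULL)
		return -ENOMEM;
	/* The same entry can later be looked up again by its index. */
	flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], idx);
	if (flow == NULL)
		return -ENOENT;
	/* Return the entry to the pool when the flow is destroyed. */
	mlx5_ipool_free(priv->flows[MLX5_FLOW_TYPE_GEN], idx);
	return 0;
}

The generic rte_flow pool is the only entry configured with per_core_cache = 1 << 14, which presumably trades some memory for lock-free allocation on the flow-insertion fast path; that would also explain why the patch zeroes the cache when reclaim mode requires memory to be returned to the system.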


Thread overview: 7+ messages
2021-07-21  8:34 Suanming Mou
2021-07-21  8:40 ` Tal Shnaiderman
2021-07-21  8:42   ` Odi Assli [this message]
2021-07-21  8:43 ` Matan Azrad
2021-07-22 14:16   ` Thomas Monjalon
2021-07-22  6:59 ` [dpdk-dev] [PATCH v2] net/mlx5: fix indexed pools allocation Suanming Mou
2021-07-22 14:18   ` Thomas Monjalon
