DPDK patches and discussions
From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: "Xueming(Steven) Li" <xuemingl@nvidia.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Xueming(Steven) Li" <xuemingl@nvidia.com>,
	 Matan Azrad <matan@nvidia.com>,
	Shahaf Shuler <shahafs@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH] net/mlx5: probe LAG representor with PF1 PCI address
Date: Thu, 6 May 2021 11:27:16 +0000	[thread overview]
Message-ID: <DM6PR12MB3753A0746FCC073827DCBC82DF589@DM6PR12MB3753.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20210422072446.29455-1-xuemingl@nvidia.com>

Hi, Xueming,

This patch looks like a fix - could you please add the "Fixes:" tag and update the headline accordingly?
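For reference, the expected form is something like the lines below; the commit ID
and headline of the commit being fixed are placeholders here and need to be filled
in with the real ones:

    Fixes: <12-char commit ID> ("<headline of the fixed commit>")
    Cc: stable@dpdk.org    (only if a backport to stable is wanted)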

With best regards,
Slava

> -----Original Message-----
> From: Xueming Li <xuemingl@nvidia.com>
> Sent: Thursday, April 22, 2021 10:25
> Cc: dev@dpdk.org; Xueming(Steven) Li <xuemingl@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>
> Subject: [PATCH] net/mlx5: probe LAG representor with PF1 PCI address
> 
> In case of bonding, the orchestrator wants to use the same devargs for
> the LAG and non-LAG scenarios, i.e. to probe the representors on PF1
> using the PF1 PCI address, like "<DBDF_PF1>,representor=pf1vf[0-3]".
> 
> This patch changes the PCI address check policy to also accept the PF1
> PCI address for representors on PF1.
> 
> Note: detaching the PF0 device does not remove the representors on PF1.
> It is recommended to use the primary (PF0) PCI address to probe the
> representors on both PFs.
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/mlx5/linux/mlx5_os.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index 76c72d0e38..22271e289a 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1875,11 +1875,14 @@ mlx5_device_bond_pci_match(const struct
> ibv_device *ibv_dev,
>  				tmp_str);
>  			break;
>  		}
> -		/* Match PCI address. */
> +		/* Match PCI address, allows BDF0+pfx or BDFx+pfx. */
>  		if (pci_dev->domain == pci_addr.domain &&
>  		    pci_dev->bus == pci_addr.bus &&
>  		    pci_dev->devid == pci_addr.devid &&
> -		    pci_dev->function + owner == pci_addr.function)
> +		    ((pci_dev->function == 0 &&
> +		      pci_dev->function + owner == pci_addr.function) ||
> +		     (pci_dev->function == owner &&
> +		      pci_addr.function == owner)))
>  			pf = info.port_name;
>  		/* Get ifindex. */
>  		snprintf(tmp_str, sizeof(tmp_str),
> --
> 2.25.1
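
Just to double-check my reading of the new condition: stripped of the
surrounding loop, the match policy boils down to the small standalone
sketch below. Here "probe" stands for the PCI address the device is
being probed with, "port" for a bonding member port address, and
"owner" for the PF index taken from the representor devargs; the
struct, function and variable names are illustrative only, not the
actual mlx5 definitions.

/*
 * Standalone sketch of the bonding representor PCI match policy.
 * Names are illustrative; this is not the mlx5 PMD code itself.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pci_addr_sketch {
        uint32_t domain;
        uint8_t bus;
        uint8_t devid;
        uint8_t function;
};

static bool
bond_pci_match(const struct pci_addr_sketch *probe,
               const struct pci_addr_sketch *port, unsigned int owner)
{
        if (probe->domain != port->domain || probe->bus != port->bus ||
            probe->devid != port->devid)
                return false;
        /* Old policy: only the PF0 address, offset by the owner PF index. */
        if (probe->function == 0 && probe->function + owner == port->function)
                return true;
        /* New policy: the PFx address itself is accepted for pfx representors. */
        return probe->function == owner && port->function == owner;
}

int
main(void)
{
        struct pci_addr_sketch pf0 = { 0, 0x08, 0, 0 };
        struct pci_addr_sketch pf1 = { 0, 0x08, 0, 1 };

        /* PF0 address, pf1 representor: matched before and after the patch. */
        printf("%d\n", bond_pci_match(&pf0, &pf1, 1));
        /* PF1 address, pf1 representor: matched only with this patch. */
        printf("%d\n", bond_pci_match(&pf1, &pf1, 1));
        /* PF1 address, pf0 representor: still not matched. */
        printf("%d\n", bond_pci_match(&pf1, &pf0, 0));
        return 0;
}

The first branch is the pre-existing PF0-address path, so existing
devargs keep working unchanged.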


Thread overview: 5+ messages
2021-04-22  7:24 Xueming Li
2021-05-06 11:27 ` Slava Ovsiienko [this message]
2021-05-10 13:13 ` [dpdk-dev] [PATCH v2] net/mlx5: fix LAG representor probe on PF1 PCI Xueming Li
2021-05-11  7:44   ` Slava Ovsiienko
2021-05-12 10:35     ` Thomas Monjalon
