DPDK usage discussions
From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: "Lukáš Šišmiš" <sismis@cesnet.cz>, "users@dpdk.org" <users@dpdk.org>
Subject: RE: Determining vendor and model from the port ID
Date: Wed, 23 Apr 2025 12:29:58 +0000	[thread overview]
Message-ID: <DS0PR12MB8561407DB7D90A175DD57660DFBA2@DS0PR12MB8561.namprd12.prod.outlook.com> (raw)
In-Reply-To: <MN6PR12MB85673E2940EE558373821901DFAD2@MN6PR12MB8567.namprd12.prod.outlook.com>

Hi,

The first version: https://patches.dpdk.org/project/dpdk/patch/20250423122807.121990-1-viacheslavo@nvidia.com/

With best regards,
Slava

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Monday, March 31, 2025 4:54 PM
> To: Lukáš Šišmiš <sismis@cesnet.cz>; users@dpdk.org
> Subject: RE: Determining vendor and model from the port ID
> 
> Hi, Lukas
> 
> Thank you for the confirmation.
> 
> > Do you think this can be changed officially in the DPDK versions?
> Yes, we are now considering an official patch with this mitigation.
> In addition, we are considering converting the errors to warnings such as
> "The requested queue capacity is not guaranteed".
> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Lukáš Šišmiš <sismis@cesnet.cz>
> > Sent: Friday, March 28, 2025 2:52 PM
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>; users@dpdk.org
> > Subject: Re: Determining vendor and model from the port ID
> >
> > Hi Slava,
> >
> > thanks for your reply (and the detailed explanation!). The patch works, and
> > I can see 32k-long RX/TX queues configured on the ConnectX-4 card
> > (MCX416A-CCAT) - I tried it on DPDK v24.11.1.
> > Do you think this can be changed officially in the DPDK versions?
> >
> > Thank you.
> >
> > Best,
> >
> > Lukas
> >
> > On 3/25/25 22:57, Slava Ovsiienko wrote:
> > > Hi, Lukas
> > >
> > > Some older NICs (depending on the HW generation and the FW configuration)
> > > may require the packet data to be inlined (encapsulated) into the WQE
> > > (the hardware descriptor of the Tx queue) to allow some steering engine
> > > features to work correctly. The minimal number of bytes to be inlined
> > > is obtained by DPDK from the FW (it reports the required L2/L3/L4/tunnel
> > > headers and the mlx5 PMD calculates the size in bytes); the minimum,
> > > if any is required, is 18B.
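For illustration, the two constants discussed in this thread, and a sketch
of the check that produces the error Lukas saw (only the 18B/12B values are
taken from the thread; the variable names here are hypothetical):

    /* Values as quoted in this thread, from the mlx5 PMD headers. */
    #define MLX5_ESEG_MIN_INLINE_SIZE 18u /* FW-required inline data */
    #define MLX5_DSEG_MIN_INLINE_SIZE 12u /* min inline in a data segment */

    /* Sketch of the failing condition: the FW-required inline size must
     * fit into the inline budget computed for the requested queue depth:
     * "minimal data inline requirements (18) are not satisfied (12)". */
    if (inline_required > inline_max_for_queue_size)
            return -EINVAL; /* -> "try the smaller Tx queue size" */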
> > >
> > > Then, the Tx queue has an absolute limit on the number of descriptors;
> > > for CX4/5/6/7 it is 32K (a typical value, also queried from the FW as
> > > dev_cap.max_qp_wr). The application also asks for a given Tx queue
> > > capacity, specifying a number of abstract descriptors, to make sure the
> > > Tx queue can store that number of packets. For the maximal allowed queue
> > > size, the txq_calc_inline_max() routine returns 12B
> > > (MLX5_DSEG_MIN_INLINE_SIZE). This is the maximal number of inline bytes
> > > that keeps the WQE size small enough to guarantee the requested number
> > > of WQEs within a queue of limited size.
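A back-of-the-envelope sketch of the trade-off described above (illustrative
only -- the real computation lives in txq_calc_inline_max(), and the
fixed-overhead parameter here is a placeholder, not a PMD constant):

    /* With a bounded send queue, asking for more WQEs (descriptors)
     * leaves fewer bytes per WQE, and the fixed control/Ethernet/pointer
     * segments eat into what remains for inline packet data. */
    static unsigned int
    inline_budget(unsigned int sq_total_bytes, unsigned int n_desc,
                  unsigned int fixed_seg_bytes /* placeholder overhead */)
    {
            unsigned int wqe_size = sq_total_bytes / n_desc;

            return wqe_size > fixed_seg_bytes ?
                   wqe_size - fixed_seg_bytes : 0;
    }

E.g., per the numbers below, at the maximal 32K descriptors the per-WQE
budget shrinks to 64B, leaving only 12B for inline data under the
conservative estimate.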
> > >
> > > So, it seems the ConnectX-4 on your side requires 18B of inline data,
> > > but for the maximal queue size the WQE size is 64B and we can inline
> > > only 12B. The good news is that the txq_calc_inline_max() estimation is
> > > too conservative, and in reality we can inline 18B. I think we can
> > > replace MLX5_DSEG_MIN_INLINE_SIZE with MLX5_ESEG_MIN_INLINE_SIZE in
> > > txq_calc_inline_max() and try. Could you, please?
> > >
> > > diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> > > index 3e93517323..1913094e5c 100644
> > > --- a/drivers/net/mlx5/mlx5_txq.c
> > > +++ b/drivers/net/mlx5/mlx5_txq.c
> > > @@ -739,7 +739,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> > >                     MLX5_WQE_ESEG_SIZE -
> > >                     MLX5_WSEG_SIZE -
> > >                     MLX5_WSEG_SIZE +
> > > -                  MLX5_DSEG_MIN_INLINE_SIZE;
> > > +                  MLX5_ESEG_MIN_INLINE_SIZE;
> > >          return wqe_size;
> > >   }
> > >
> > > With best regards,
> > > Slava
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: Lukáš Šišmiš <sismis@cesnet.cz>
> > >> Sent: Tuesday, March 25, 2025 3:33 PM
> > >> To: users@dpdk.org; Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava
> > >> Ovsiienko <viacheslavo@nvidia.com>
> > >> Subject: Determining vendor and model from the port ID
> > >>
> > >> Hello all,
> > >>
> > >> I am trying to determine the vendor and model of the port ID that
> > >> I am interacting with, but all references lead me to an obsolete API.
> > >>
> > >> The goal is to execute specific code only when I am dealing with
> > >> Mellanox ConnectX-4-family cards. Longer explanation below.
> > >>
> > >> I would like to access "struct rte_pci_id", but it always seems to be
> > >> hidden at the driver level.
> > >>
> > >> Is there any way to approach this?
> > >>
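For what it's worth, a minimal sketch of one possible approach (assumptions:
DPDK 22.11+ where rte_dev_bus_info() exists, and that the PCI bus formats
its info string as "vendor_id=...., device_id=...." -- verify both on your
DPDK version; port_is_connectx4() is a hypothetical helper name):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_dev.h>

    /* Identify the device behind a port without touching
     * driver-private structs such as struct rte_pci_id. */
    static int
    port_is_connectx4(uint16_t port_id)
    {
            struct rte_eth_dev_info dev_info;
            const char *bus_info;

            if (rte_eth_dev_info_get(port_id, &dev_info) != 0 ||
                dev_info.device == NULL)
                    return 0;
            bus_info = rte_dev_bus_info(dev_info.device);
            if (bus_info == NULL)
                    return 0;
            /* 15b3 = Mellanox; 1013/1015 = ConnectX-4 / ConnectX-4 Lx. */
            return strstr(bus_info, "vendor_id=15b3") != NULL &&
                   (strstr(bus_info, "device_id=1013") != NULL ||
                    strstr(bus_info, "device_id=1015") != NULL);
    }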
> > >>
> > >> Longer explanation of the problem:
> > >>
> > >> In https://github.com/OISF/suricata/pull/12654 I am using dev_info to
> > >> get the maximum number of TX descriptors advertised by the PMD for the
> > >> port. But when I then set that number of TX descriptors, the driver
> > >> complains: "minimal data inline requirements (18) are not satisfied
> > >> (12) on port 0, try the smaller Tx queue size (32768)". However, this
> > >> problem occurs only on the ConnectX-4 family and not on CX5/6/7
> > >> (that's why I cannot limit this to just the mlx5 PMD).
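For reference, a minimal sketch of the query-and-clamp pattern described
above (standard ethdev calls; the halve-and-retry fallback is a hypothetical
workaround for the ConnectX-4 behaviour, not something the PMD prescribes):

    #include <rte_ethdev.h>

    /* Request the PMD-advertised TX descriptor maximum, then fall back
     * to smaller queue sizes if setup fails, as it can on ConnectX-4
     * when the inline-data check rejects the maximal queue depth.
     * Assumes the port has already been configured. */
    static int
    setup_txq_with_fallback(uint16_t port_id, uint16_t queue_id,
                            unsigned int socket_id)
    {
            struct rte_eth_dev_info dev_info;
            uint16_t nb_txd;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;
            nb_txd = dev_info.tx_desc_lim.nb_max; /* e.g. 32768 on mlx5 */
            do {
                    ret = rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                                                 socket_id, NULL);
                    if (ret == 0)
                            return 0;
                    nb_txd /= 2; /* hypothetical fallback: halve, retry */
            } while (nb_txd != 0 && nb_txd >= dev_info.tx_desc_lim.nb_min);
            return ret;
    }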
> > >>
> > >> Alternatively, can this be fixed/addressed directly in the mlx5 PMD?
> > >> The mlx5 PMD would need to advertise 16384 TX descriptors as the
> > >> maximum only for the ConnectX-4 family.
> > >> (Putting Dariusz and Viacheslav in the loop; please reassign if needed.)
> > >>
> > >> Thank you.
> > >>
> > >> Best,
> > >>
> > >> Lukas

Thread overview: 8+ messages
2025-03-25 13:32 Lukáš Šišmiš
2025-03-25 14:07 ` Stephen Hemminger
2025-03-25 15:17   ` Lukáš Šišmiš
2025-03-25 15:57 ` Slava Ovsiienko
2025-03-28 12:52   ` Lukáš Šišmiš
2025-03-31 13:54     ` Slava Ovsiienko
2025-04-23 12:29       ` Slava Ovsiienko [this message]
2025-04-24  8:32         ` Lukáš Šišmiš
