DPDK patches and discussions
From: madhukar mythri <madhukar.mythri@gmail.com>
To: Long Li <longli@microsoft.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	"dev@dpdk.org" <dev@dpdk.org>,
	 Madhuker Mythri <madhuker.mythri@oracle.com>
Subject: Re: [EXTERNAL] [PATCH] net/netvsc: Fix on race condition of multiple commands
Date: Thu, 1 Jan 2026 23:13:02 +0530	[thread overview]
Message-ID: <CAAUNki2RCAGFw3-FG6=G0ZnaHRJ_pqnN3etFJYc0cePpZF01ww@mail.gmail.com> (raw)
In-Reply-To: <EA3PR21MB5743C76495BAA587CAA14C0DCEBDA@EA3PR21MB5743.namprd21.prod.outlook.com>


Hi Long,

Yes, got it.
Caching the results of 'hn_rndis_query_hwcaps()' inside the PMD will be
better.
Yes, you can submit the patch, or let me know if you are busy.

Thanks,
Madhukar.

On Thu, Jan 1, 2026 at 4:52 AM Long Li <longli@microsoft.com> wrote:

> Hi Madhukar,
>
> I suggest caching the result of hn_rndis_query_hwcaps(), as suggested by
> Stephen. This can be done inside the PMD. Do you want me to submit a patch?
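>
> For illustration, a minimal sketch of that caching, assuming a new
> cached-copy field in struct hn_data and that hn_rndis_query_hwcaps()
> fills in a struct ndis_offload (the names here are illustrative, not
> the final patch):
>
>     /* in struct hn_data: hardware caps queried once at attach time */
>     struct ndis_offload hwcaps;
>     bool hwcaps_valid;
>
>     /* at probe/attach time, issue the RNDIS query a single time */
>     static int hn_rndis_cache_hwcaps(struct hn_data *hv)
>     {
>         int ret = hn_rndis_query_hwcaps(hv, &hv->hwcaps);
>
>         if (ret == 0)
>             hv->hwcaps_valid = true;
>         return ret;
>     }
>
>     /* later callers (e.g. dev_info_get) read the cached copy and
>      * never touch the RNDIS control channel again */
>     if (hv->hwcaps_valid)
>         offload = hv->hwcaps;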
>
> For querying link status, the lock should be implemented inside
> hn_rndis_exec1(). The drawback is that this function can potentially wait
> up to 60 seconds for a response from the host, so a spinlock may not be
> suitable for production use. But I think it's better to have the
> application retry on BUSY (with some delay logic), as netvsc has been
> designed this way since it was introduced.
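>
> For example, a minimal sketch of that retry logic on the application
> side, assuming the driver's -EBUSY is propagated up through
> rte_eth_link_get_nowait() (the retry count and delay are arbitrary):
>
>     #include <errno.h>
>     #include <rte_ethdev.h>
>     #include <rte_cycles.h>
>
>     static int
>     link_get_with_retry(uint16_t port_id, struct rte_eth_link *link)
>     {
>         int ret;
>         int retries = 10;
>
>         do {
>             ret = rte_eth_link_get_nowait(port_id, link);
>             if (ret != -EBUSY)
>                 return ret;       /* success, or a real error */
>             rte_delay_ms(10);     /* wait for the other command */
>         } while (--retries > 0);
>
>         return -EBUSY;            /* still busy after all retries */
>     }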
>
> Thanks,
>
> Long
>
> *From:* madhukar mythri <madhukar.mythri@gmail.com>
> *Sent:* Saturday, December 20, 2025 9:37 PM
> *To:* Stephen Hemminger <stephen@networkplumber.org>
> *Cc:* Long Li <longli@microsoft.com>; dev@dpdk.org; Madhuker Mythri <
> madhuker.mythri@oracle.com>
> *Subject:* Re: [EXTERNAL] [PATCH] net/netvsc: Fix on race condition of
> multiple commands
>
> Hi Li and Stephen,
>
> We have a common DPDK application for all PMDs, and we are seeing this
> issue only with the netvsc PMD.
>
> That is, on a KVM hypervisor with Intel or Mellanox NICs we did not see
> such sync issues, and we did not see them with the failsafe PMD on
> Hyper-V either.
>
> So I thought it would be better to fix this at the PMD level using a
> spinlock.
>
> @Stephen Hemminger <stephen@networkplumber.org>, yes, we can store the
> device-info details after probe and reuse them later.
>
> For link-status get with multiple threads, we can go with a retry
> mechanism.
>
> However, unlike with all other PMDs, device-info get and link-status get
> have these issues here in a multi-threaded application.
>
> Regards,
>
> Madhuker.
>
> On Sat, 20 Dec, 2025, 23:55 Stephen Hemminger, <stephen@networkplumber.org>
> wrote:
>
> On Fri, 19 Dec 2025 17:35:33 +0000
> Long Li <longli@microsoft.com> wrote:
>
> > > When multiple processes issue command requests (like device-info get
> > > and link-status) at the same time, we can see command-request failures
> > > due to a race condition in common function execution.
> >
> > Hi Madhuker,
> >
> > I'm not sure if we should use a lock in the driver for this. It's not
> > clear in the DPDK documentation, but in general the calls to query
> > device status are not thread-safe.
> >
> > Is it possible for the application to use a lock to synchronize these
> > calls?
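> >
> > For instance, a minimal sketch of such application-side locking,
> > assuming all control-path queries go through one wrapper (the wrapper
> > name is made up):
> >
> >     #include <rte_spinlock.h>
> >     #include <rte_ethdev.h>
> >
> >     static rte_spinlock_t ctrl_lock = RTE_SPINLOCK_INITIALIZER;
> >
> >     static int
> >     app_link_get(uint16_t port_id, struct rte_eth_link *link)
> >     {
> >         int ret;
> >
> >         /* serialize every control-path call against this lock */
> >         rte_spinlock_lock(&ctrl_lock);
> >         ret = rte_eth_link_get_nowait(port_id, link);
> >         rte_spinlock_unlock(&ctrl_lock);
> >         return ret;
> >     }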
> >
>
> I do not know of any restrictions about threads calling query operations.
>
> For info_get() the transaction is in rndis_get_offload().
> There are a couple of ways to handle this better. One would be to do
> the query during probe and remember the result. The hypervisor is
> not going to change the supported offloads. The other, simpler way
> would be to just have hardcoded offload values. The code for querying
> the offloads is inherited from BSD, and unless someone is trying to run
> on Windows 2012 or an earlier version of Hyper-V it would never change.
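>
> For the hardcoded variant, a sketch of what info_get could report
> instead of querying the host (the exact flag set is illustrative; the
> real values would match what Hyper-V has always advertised):
>
>     /* report fixed capabilities rather than issuing an RNDIS query */
>     dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
>                                 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
>                                 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
>                                 RTE_ETH_TX_OFFLOAD_TCP_TSO;
>     dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
>                                 RTE_ETH_RX_OFFLOAD_RSS_HASH;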
>
> Link status is a little more complex. Does the hypervisor ever report
> that the software path is down? And reading through the hn_rndis_exec
> code, it looks like if multiple operations are in progress the second
> one should return -EBUSY. The application could retry in that case.

