From: madhukar mythri
Date: Sun, 21 Dec 2025 11:07:16 +0530
Subject: Re: [EXTERNAL] [PATCH] net/netvsc: Fix on race condition of multiple commands
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Long Li <longli@microsoft.com>, dev@dpdk.org, Madhuker Mythri

Hi Li and Stephen,

We have a common DPDK application for all PMDs, and we are seeing this issue only with the netvsc PMD. That is, on a KVM hypervisor with Intel or Mellanox NICs we did not see such sync issues, and with the failsafe PMD on Hyper-V we did not see them either.

So, I thought it would be better to fix this at the PMD level using a spinlock.
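
As a minimal sketch of that idea (the lock field, the wrapper name, and the assumed hn_rndis_exec() signature are illustrative, not the actual netvsc PMD code):

#include <stdint.h>
#include <rte_spinlock.h>

/* Hypothetical per-device state: cmd_lock serializes the single
 * outstanding RNDIS request so concurrent queries cannot race. */
struct hn_data {
        rte_spinlock_t cmd_lock;        /* rte_spinlock_init() at probe */
        /* ... rest of the per-device state ... */
};

/* Assumed signature for the existing command path named in this thread. */
int hn_rndis_exec(struct hn_data *hv, uint32_t req, void *resp, uint32_t len);

/* Illustrative wrapper: route every control-path command through here. */
static int
hn_exec_locked(struct hn_data *hv, uint32_t req, void *resp, uint32_t len)
{
        int ret;

        rte_spinlock_lock(&hv->cmd_lock);
        ret = hn_rndis_exec(hv, req, resp, len);
        rte_spinlock_unlock(&hv->cmd_lock);
        return ret;
}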

@Stephen Hemminger, yes, we can store the device-info-get details after probe and reuse them later.

For link-status get with multiple threads, we can go with a retry mechanism.

However, compared with all other PMDs, the device-info get and link-status get here have issues in a multi-threaded application.

Regards,
Madhuker.

On Sat, 20 Dec 2025, 23:55 Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Fri, 19 Dec 2025 17:35:33 +0000
> Long Li <longli@microsoft.com> wrote:
> >
> > > When multiple processes issue command requests (like device info get
> > > and link-status) at the same time, then we could see command request
> > > failures due to a race condition in the common function execution.
> >
> > Hi Madhuker,
> >
> > I'm not sure if we should use a lock in the driver for this. It's not
> > clear in the DPDK documentation, but in general the calls to query
> > device status are not thread-safe.
> >
> > Is it possible for the application to use a lock to synchronize these
> > calls?
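
For illustration, the application-level locking suggested here could be a single mutex around the control-path queries (a sketch; the wrapper name is hypothetical):

#include <pthread.h>
#include <rte_ethdev.h>

/* One mutex guards all control-path queries, so only one RNDIS
 * transaction per port can be outstanding at a time. */
static pthread_mutex_t query_lock = PTHREAD_MUTEX_INITIALIZER;

static int
app_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *info)
{
        int ret;

        pthread_mutex_lock(&query_lock);
        ret = rte_eth_dev_info_get(port_id, info);
        pthread_mutex_unlock(&query_lock);
        return ret;
}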

> I do not know of any restrictions about threads calling query operations.
>
> For info_get(), the transaction is in rndis_get_offload(). There are a
> couple of ways to handle this better. One would be to do the query during
> probe and remember the result; the hypervisor is not going to change the
> supported offloads. The other, simpler way would be to just have hardcoded
> offload values. The query code for computing offloads is inherited from
> BSD, and unless someone were trying to run on Windows 2012 or an earlier
> version of Hyper-V, it would never change.
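
A rough sketch of the probe-time caching idea (the struct fields and the assumed hn_rndis_get_offload() signature are illustrative, not the actual driver internals):

#include <stdint.h>
#include <rte_ethdev.h>

/* Query the offload capabilities once at probe and serve dev_infos_get()
 * from the cached copy, so the RNDIS channel is never touched on the
 * runtime query path. */
struct hn_data {
        uint64_t rx_offload_capa;       /* filled once during probe */
        uint64_t tx_offload_capa;
        /* ... */
};

/* Assumed signature for the existing offload query. */
int hn_rndis_get_offload(struct hn_data *hv, uint64_t *rx, uint64_t *tx);

static int
hn_probe_offloads(struct hn_data *hv)
{
        /* One-time transaction; the hypervisor's offloads do not change. */
        return hn_rndis_get_offload(hv, &hv->rx_offload_capa,
                                    &hv->tx_offload_capa);
}

static int
hn_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
{
        struct hn_data *hv = dev->data->dev_private;

        info->rx_offload_capa = hv->rx_offload_capa; /* no RNDIS call here */
        info->tx_offload_capa = hv->tx_offload_capa;
        return 0;
}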

> Link status is a little more complex. Does the hypervisor ever report
> that the software path is down? And reading through the hn_rndis_exec
> code, it looks like if multiple operations are in progress, the second
> one should return -EBUSY. The application could retry in that case.
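
On the application side, that retry could look like this (a sketch; the retry count and backoff are arbitrary choices):

#include <errno.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>

/* Retry the link-status query while the driver reports that another
 * RNDIS request is already in flight (-EBUSY). */
static int
link_get_retry(uint16_t port_id, struct rte_eth_link *link)
{
        int ret;

        for (int i = 0; i < 10; i++) {
                ret = rte_eth_link_get_nowait(port_id, link);
                if (ret != -EBUSY)
                        return ret;
                rte_delay_us_block(100); /* brief backoff, then retry */
        }
        return -EBUSY;
}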