DPDK usage discussions
* Netvsc vs Failsafe Performance
@ 2024-09-03 21:43 Nandini Rangaswamy
  2024-09-04  0:03 ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-03 21:43 UTC (permalink / raw)
  To: Stephen Hemminger, Long Li, users


Hi Stephen and Long,
I was going through one of the netvsc patches
https://mails.dpdk.org/archives/dev/2018-August/110559.html which mentioned
that netvsc and failsafe give the same performance in the VF path, whereas
for some exception-path tests, about a 22% performance gain is seen.
I ran some tests locally with my DPDK app integrated with the netvsc PMD and
observed that netvsc does give nearly the same performance as failsafe in
the VF path.
Since the official documentation does not explicitly state this, I would
like to confirm that this holds.
Regards,
Nandini



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-03 21:43 Netvsc vs Failsafe Performance Nandini Rangaswamy
@ 2024-09-04  0:03 ` Stephen Hemminger
  2024-09-04  0:21   ` Nandini Rangaswamy
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2024-09-04  0:03 UTC (permalink / raw)
  To: Nandini Rangaswamy; +Cc: Long Li, users

On Tue, 3 Sep 2024 14:43:28 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Hi Stephen and Long,
> I was going through one of the netvsc patches
> https://mails.dpdk.org/archives/dev/2018-August/110559.html which mentioned
> that netvsc and failsafe give the same performance in the VF path, whereas
> for some exception-path tests, about a 22% performance gain is seen.
> I ran some tests locally with my DPDK app integrated with the netvsc PMD and
> observed that netvsc does give nearly the same performance as failsafe in
> the VF path.
> Since the official documentation does not explicitly state this, I would
> like to confirm that this holds.
> Regards,
> Nandini
> 

It shouldn't be. What settings are you using?
Both failsafe and netvsc just pass packets to the VF if present.
There are even more locks to go through with failsafe.

Are you sure the test doesn't exercise something like checksumming, which
may be different?

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-04  0:03 ` Stephen Hemminger
@ 2024-09-04  0:21   ` Nandini Rangaswamy
  2024-09-04 22:42     ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-04  0:21 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Long Li, users


Hi Stephen/Long,
dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing with
TCP traffic (generated with the iperf3 tool) between a pair of clients and
servers, with the DPDK app forwarding traffic between them.
This is the config being passed when configuring netvsc:
lsc_intr=1
rxq/txq=2/2
RSS is enabled with rss_hf=0x0000000000000c30
tx_ol=0x00000000000006
rx_ol=0x00000000080007

The RSS key length is 64.
struct rte_eth_conf conf = {
    .intr_conf = {
        .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
               !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
    },
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,
        .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
                    RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                    RTE_ETH_RX_OFFLOAD_RSS_HASH |
                    RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
    },
    .rx_adv_conf.rss_conf = {
        .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
                  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
        .rss_key = conf_rss_key,
        .rss_key_len = rss_key_len,
    },
    .txmode = {
        .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
                    RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
    },
};
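
The conf above is then applied roughly like this (a sketch; the queue
counts are the ones quoted above, the error handling is illustrative and
not the exact application code):

    ret = rte_eth_dev_configure(port_id, 2 /* rxq */, 2 /* txq */, &conf);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "cannot configure port %u\n", port_id);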

Regards,
Nandini

On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger <stephen@networkplumber.org>
wrote:

> On Tue, 3 Sep 2024 14:43:28 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>
> > Hi Stephen and Long,
> > I was going through one of the netvsc patches
> > https://mails.dpdk.org/archives/dev/2018-August/110559.html which
> > mentioned that netvsc and failsafe give the same performance in the VF
> > path, whereas for some exception-path tests, about a 22% performance
> > gain is seen.
> > I ran some tests locally with my DPDK app integrated with the netvsc PMD
> > and observed that netvsc does give nearly the same performance as
> > failsafe in the VF path.
> > Since the official documentation does not explicitly state this, I would
> > like to confirm that this holds.
> > Regards,
> > Nandini
> >
>
> It shouldn't be. What settings are you using?
> Both failsafe and netvsc just pass packets to the VF if present.
> There are even more locks to go through with failsafe.
>
> Are you sure the test doesn't exercise something like checksumming, which
> may be different?
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-04  0:21   ` Nandini Rangaswamy
@ 2024-09-04 22:42     ` Stephen Hemminger
  2024-09-05  2:30       ` Long Li
  2024-09-12 22:02       ` Nandini Rangaswamy
  0 siblings, 2 replies; 14+ messages in thread
From: Stephen Hemminger @ 2024-09-04 22:42 UTC (permalink / raw)
  To: Nandini Rangaswamy; +Cc: Long Li, users

On Tue, 3 Sep 2024 17:21:48 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Hi Stephen/Long,
> dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing with
> TCP traffic (generated with the iperf3 tool) between a pair of clients and
> servers, with the DPDK app forwarding traffic between them.
> This is the config being passed when configuring netvsc:
> lsc_intr=1
> rxq/txq=2/2
> RSS is enabled with rss_hf=0x0000000000000c30
> tx_ol=0x00000000000006
> rx_ol=0x00000000080007
>
> The RSS key length is 64.
> struct rte_eth_conf conf = {
>     .intr_conf = {
>         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
>                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
>     },
>     .rxmode = {
>         .mq_mode = RTE_ETH_MQ_RX_RSS,
>         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
>                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
>                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
>                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
>     },
>     .rx_adv_conf.rss_conf = {
>         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
>                   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
>                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
>         .rss_key = conf_rss_key,
>         .rss_key_len = rss_key_len,
>     },
>     .txmode = {
>         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
>                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
>     },
> };
> 
> Regards,
> Nandini
> 
> On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger <stephen@networkplumber.org>
> wrote:
> 
> > On Tue, 3 Sep 2024 14:43:28 -0700
> > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> >  
> > > Hi Stephen and Long,
> > > I was going through one of the netvsc patches
> > > https://mails.dpdk.org/archives/dev/2018-August/110559.html which  
> > mentioned  
> > > that netvsc and failsafe give the same performance in VF path whereas for
> > > some exception path  tests, about 22% performance gain in seen.
> > > I ran some tests locally with my dpdk app integrated with netvsc PMD and
> > > observed that netvsc does give nearly the same performance as failsafe in
> > > the VF path.
> > > Since the official document does not explicitly cite this, I would like  
> > to  
> > > confirm if this holds good.
> > > Regards,
> > > Nandini
> > >  
> >
> > Shouldn't be. What settings are you using.
> > Both failsafe and netvsc just pass packets to VF if present.
> > There is even more locks to go through with failsafe.
> >
> > Are you sure the test doesn't exercise something like checksumming which
> > maybe different.
> >  
> 

How many streams? RSS won't matter unless there are multiple streams.
The netvsc driver does not list RSS for UDP as a supported flag.
It turns out that for that version of NDIS, if you ask for TCP RSS, UDP RSS
is implied.

The RSS key must be 40 bytes (Toeplitz), not 64 bytes.
Just use the default key (rss_key == NULL, rss_key_len = 0) to be safe.
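
In conf terms that looks roughly like this (a sketch of just the RSS part):

    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                      RTE_ETH_RSS_NONFRAG_IPV6_TCP,
            .rss_key = NULL,    /* PMD falls back to its built-in key */
            .rss_key_len = 0,
        },
    };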

Check that packets are going to the VF. One way to do that is to look at
xstats on both the netvsc and mlx5 devices.
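
Something along these lines would dump them (a minimal sketch; the port id
is whatever rte_eth_dev_get_port_by_name() returns for each device):

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    static void dump_xstats(uint16_t port_id)
    {
        /* first call sizes the arrays, second call fills them */
        int n = rte_eth_xstats_get(port_id, NULL, 0);
        if (n <= 0)
            return;
        struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));
        struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
        if (rte_eth_xstats_get(port_id, vals, n) == n &&
            rte_eth_xstats_get_names(port_id, names, n) == n) {
            for (int i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
        }
        free(vals);
        free(names);
    }

If the VF is carrying the traffic, the mlx5 counters move while the netvsc
rx/tx counters stay flat.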



^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: Netvsc vs Failsafe Performance
  2024-09-04 22:42     ` Stephen Hemminger
@ 2024-09-05  2:30       ` Long Li
  2024-09-12 20:47         ` Nandini Rangaswamy
  2024-09-12 22:02       ` Nandini Rangaswamy
  1 sibling, 1 reply; 14+ messages in thread
From: Long Li @ 2024-09-05  2:30 UTC (permalink / raw)
  To: Stephen Hemminger, Nandini Rangaswamy; +Cc: users

> Subject: Re: Netvsc vs Failsafe Performance
> 
> On Tue, 3 Sep 2024 17:21:48 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> 
> > Hi Stephen/Long,
> > dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing with
> > TCP traffic (generated with the iperf3 tool) between a pair of clients
> > and servers, with the DPDK app forwarding traffic between them.
> > This is the config being passed when configuring netvsc:
> > lsc_intr=1
> > rxq/txq=2/2
> > RSS is enabled with rss_hf=0x0000000000000c30
> > tx_ol=0x00000000000006
> > rx_ol=0x00000000080007
> >
> > The RSS key length is 64.
> > struct rte_eth_conf conf = {
> >     .intr_conf = {
> >         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
> >                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
> >     },
> >     .rxmode = {
> >         .mq_mode = RTE_ETH_MQ_RX_RSS,
> >         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> >                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> >                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
> >                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
> >     },
> >     .rx_adv_conf.rss_conf = {
> >         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
> >                   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> >                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
> >         .rss_key = conf_rss_key,
> >         .rss_key_len = rss_key_len,
> >     },
> >     .txmode = {
> >         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> >                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
> >     },
> > };
> >
> > Regards,
> > Nandini
> >
> > On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger
> > <stephen@networkplumber.org>
> > wrote:
> >
> > > On Tue, 3 Sep 2024 14:43:28 -0700
> > > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> > >
> > > > Hi Stephen and Long,
> > > > I was going through one of the netvsc patches
> > > > https://mails.dpdk.org/archives/dev/2018-August/110559.html which
> > > > mentioned that netvsc and failsafe give the same performance in the
> > > > VF path, whereas for some exception-path tests, about a 22%
> > > > performance gain is seen.
> > > > I ran some tests locally with my DPDK app integrated with the netvsc
> > > > PMD and observed that netvsc does give nearly the same performance
> > > > as failsafe in the VF path.
> > > > Since the official documentation does not explicitly state this, I
> > > > would like to confirm that this holds.
> > > > Regards,
> > > > Nandini
> > > >
> > >
> > > It shouldn't be. What settings are you using?
> > > Both failsafe and netvsc just pass packets to the VF if present.
> > > There are even more locks to go through with failsafe.
> > >
> > > Are you sure the test doesn't exercise something like checksumming,
> > > which may be different?
> > >
> >
> 
> How many streams? RSS won't matter unless there are multiple streams.
> The netvsc driver does not list RSS for UDP as a supported flag.
> It turns out that for that version of NDIS, if you ask for TCP RSS, UDP
> RSS is implied.
>
> The RSS key must be 40 bytes (Toeplitz), not 64 bytes.
> Just use the default key (rss_key == NULL, rss_key_len = 0) to be safe.
>
> Check that packets are going to the VF. One way to do that is to look at
> xstats on both the netvsc and mlx5 devices.
> 

If most traffic goes through the VF, you won't see much difference in
performance between netvsc and failsafe, because they are not on the data
path.

It seems the 22% performance gain was measured on the synthetic path,
meaning that traffic does not go through the VF. In this scenario, netvsc
has an advantage over failsafe since the traffic data doesn't need to be
copied around in kernel space.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-05  2:30       ` Long Li
@ 2024-09-12 20:47         ` Nandini Rangaswamy
  2024-09-12 23:09           ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-12 20:47 UTC (permalink / raw)
  To: Long Li; +Cc: Stephen Hemminger, users


Thanks for your response, Long Li.
I see that with netvsc the maximum number of Tx descriptors is restricted
to 4096, whereas the number of Rx descriptors is restricted to 8192.
But for the failsafe PMD, we see that both the number of Txd and Rxd is
restricted to 8192.
How is the netvsc PMD giving the same performance as the failsafe PMD?
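
For reference, these limits can be queried from the PMD rather than
hard-coded; a minimal sketch:

    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) == 0)
        printf("rxd max %u, txd max %u\n",
               info.rx_desc_lim.nb_max, info.tx_desc_lim.nb_max);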

Regards

On Wed, Sep 4, 2024 at 7:30 PM Long Li <longli@microsoft.com> wrote:

> > Subject: Re: Netvsc vs Failsafe Performance
> >
> > On Tue, 3 Sep 2024 17:21:48 -0700
> > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> >
> > > Hi Stephen/Long,
> > > dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing
> > > with TCP traffic (generated with the iperf3 tool) between a pair of
> > > clients and servers, with the DPDK app forwarding traffic between them.
> > > This is the config being passed when configuring netvsc:
> > > lsc_intr=1
> > > rxq/txq=2/2
> > > RSS is enabled with rss_hf=0x0000000000000c30
> > > tx_ol=0x00000000000006
> > > rx_ol=0x00000000080007
> > >
> > > The RSS key length is 64.
> > > struct rte_eth_conf conf = {
> > >     .intr_conf = {
> > >         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
> > >                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
> > >     },
> > >     .rxmode = {
> > >         .mq_mode = RTE_ETH_MQ_RX_RSS,
> > >         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> > >                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> > >                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
> > >                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
> > >     },
> > >     .rx_adv_conf.rss_conf = {
> > >         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
> > >                   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> > >                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
> > >         .rss_key = conf_rss_key,
> > >         .rss_key_len = rss_key_len,
> > >     },
> > >     .txmode = {
> > >         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> > >                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
> > >     },
> > > };
> > >
> > > Regards,
> > > Nandini
> > >
> > > On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger
> > > <stephen@networkplumber.org>
> > > wrote:
> > >
> > > > On Tue, 3 Sep 2024 14:43:28 -0700
> > > > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> > > >
> > > > > Hi Stephen and Long,
> > > > > I was going through one of the netvsc patches
> > > > > https://mails.dpdk.org/archives/dev/2018-August/110559.html which
> > > > > mentioned that netvsc and failsafe give the same performance in the
> > > > > VF path, whereas for some exception-path tests, about a 22%
> > > > > performance gain is seen.
> > > > > I ran some tests locally with my DPDK app integrated with the
> > > > > netvsc PMD and observed that netvsc does give nearly the same
> > > > > performance as failsafe in the VF path.
> > > > > Since the official documentation does not explicitly state this, I
> > > > > would like to confirm that this holds.
> > > > > Regards,
> > > > > Nandini
> > > > >
> > > >
> > > > It shouldn't be. What settings are you using?
> > > > Both failsafe and netvsc just pass packets to the VF if present.
> > > > There are even more locks to go through with failsafe.
> > > >
> > > > Are you sure the test doesn't exercise something like checksumming,
> > > > which may be different?
> > > >
> > >
> >
> > How many streams? RSS won't matter unless there are multiple streams.
> > The netvsc driver does not list RSS for UDP as a supported flag.
> > It turns out that for that version of NDIS, if you ask for TCP RSS, UDP
> > RSS is implied.
> >
> > The RSS key must be 40 bytes (Toeplitz), not 64 bytes.
> > Just use the default key (rss_key == NULL, rss_key_len = 0) to be safe.
> >
> > Check that packets are going to the VF. One way to do that is to look at
> > xstats on both the netvsc and mlx5 devices.
> >
>
> If most traffic goes through the VF, you won't see much difference in
> performance between netvsc and failsafe, because they are not on the data
> path.
>
> It seems the 22% performance gain was measured on the synthetic path,
> meaning that traffic does not go through the VF. In this scenario, netvsc
> has an advantage over failsafe since the traffic data doesn't need to be
> copied around in kernel space.
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-04 22:42     ` Stephen Hemminger
  2024-09-05  2:30       ` Long Li
@ 2024-09-12 22:02       ` Nandini Rangaswamy
  2024-09-12 22:59         ` Stephen Hemminger
  1 sibling, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-12 22:02 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Long Li, users


Hi Stephen/Long,
Thanks for the suggestions. I configured the RSS key to be 40 bytes as
suggested.
What NDIS version is used by netvsc, and could you please point me to a
Microsoft document confirming that if TCP RSS is requested, UDP RSS is
implied?
Regards

On Wed, Sep 4, 2024 at 3:42 PM Stephen Hemminger <stephen@networkplumber.org>
wrote:

> On Tue, 3 Sep 2024 17:21:48 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>
> > Hi Stephen/Long,
> > dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing with
> > TCP traffic (generated with the iperf3 tool) between a pair of clients
> > and servers, with the DPDK app forwarding traffic between them.
> > This is the config being passed when configuring netvsc:
> > lsc_intr=1
> > rxq/txq=2/2
> > RSS is enabled with rss_hf=0x0000000000000c30
> > tx_ol=0x00000000000006
> > rx_ol=0x00000000080007
> >
> > The RSS key length is 64.
> > struct rte_eth_conf conf = {
> >     .intr_conf = {
> >         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
> >                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
> >     },
> >     .rxmode = {
> >         .mq_mode = RTE_ETH_MQ_RX_RSS,
> >         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> >                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> >                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
> >                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
> >     },
> >     .rx_adv_conf.rss_conf = {
> >         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
> >                   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> >                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
> >         .rss_key = conf_rss_key,
> >         .rss_key_len = rss_key_len,
> >     },
> >     .txmode = {
> >         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> >                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
> >     },
> > };
> >
> > Regards,
> > Nandini
> >
> > On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> >
> > > On Tue, 3 Sep 2024 14:43:28 -0700
> > > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> > >
> > > > Hi Stephen and Long,
> > > > I was going through one of the netvsc patches
> > > > https://mails.dpdk.org/archives/dev/2018-August/110559.html which
> > > > mentioned that netvsc and failsafe give the same performance in the
> > > > VF path, whereas for some exception-path tests, about a 22%
> > > > performance gain is seen.
> > > > I ran some tests locally with my DPDK app integrated with the netvsc
> > > > PMD and observed that netvsc does give nearly the same performance
> > > > as failsafe in the VF path.
> > > > Since the official documentation does not explicitly state this, I
> > > > would like to confirm that this holds.
> > > > Regards,
> > > > Nandini
> > > >
> > >
> > > It shouldn't be. What settings are you using?
> > > Both failsafe and netvsc just pass packets to the VF if present.
> > > There are even more locks to go through with failsafe.
> > >
> > > Are you sure the test doesn't exercise something like checksumming,
> > > which may be different?
> > >
> >
>
> How many streams? RSS won't matter unless there are multiple streams.
> The netvsc driver does not list RSS for UDP as a supported flag.
> It turns out that for that version of NDIS, if you ask for TCP RSS, UDP
> RSS is implied.
>
> The RSS key must be 40 bytes (Toeplitz), not 64 bytes.
> Just use the default key (rss_key == NULL, rss_key_len = 0) to be safe.
>
> Check that packets are going to the VF. One way to do that is to look at
> xstats on both the netvsc and mlx5 devices.
>
>
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-12 22:02       ` Nandini Rangaswamy
@ 2024-09-12 22:59         ` Stephen Hemminger
  0 siblings, 0 replies; 14+ messages in thread
From: Stephen Hemminger @ 2024-09-12 22:59 UTC (permalink / raw)
  To: Nandini Rangaswamy; +Cc: Long Li, users

On Thu, 12 Sep 2024 15:02:04 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Hi Stephen/Long,
> Thanks for the suggestions. I configured the RSS key to be 40 bytes as
> suggested.
> What NDIS version is being used by netvsc and could you please direct me to
> a microsoft document confirming that if TCP RSS is requested, UDP RSS is
> implied?
> Regards

Well, the netvsc drivers for FreeBSD (on which the DPDK one is based) and
Linux only use the flag for TCP RSS, and UDP works fine.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-12 20:47         ` Nandini Rangaswamy
@ 2024-09-12 23:09           ` Stephen Hemminger
  2024-09-13 17:56             ` Nandini Rangaswamy
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2024-09-12 23:09 UTC (permalink / raw)
  To: Nandini Rangaswamy; +Cc: Long Li, users

On Thu, 12 Sep 2024 13:47:37 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Thanks for your response Long Li.
> I see with netvsc the maximum number of Tx descriptors is restricted to
> 4096 whereas the number of Rx descriptors is restricted to 8192.
> But, for failsafe PMD , we see that both the number of Txd and Rxd is
> restricted to 8192.
> How is netvsc PMD giving the same performance as failsafe PMD ?
> 
> Regards

I think the limits there were somewhat arbitrarily chosen for netvsc.
I don't remember a hard reason that would block larger sizes.

Having really big rings won't help performance (i.e., bufferbloat) and
could cost a lot of memory. All heavy data traffic goes through the VF,
and that ring is sized separately. Only DoS attacks should be impacted by
the rx/tx descriptor limits on the netvsc device. The Linux driver
actually uses a much smaller buffer.
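
If the application should track whatever limits the PMD reports instead of
hard-coding 8192, something like this works (a sketch):

    uint16_t nb_rxd = 8192, nb_txd = 8192;

    /* clamps the requested counts to the device limits; on netvsc
     * nb_txd would come back as 4096 while nb_rxd stays 8192 */
    if (rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd) != 0)
        rte_exit(EXIT_FAILURE, "cannot query descriptor limits\n");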

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-12 23:09           ` Stephen Hemminger
@ 2024-09-13 17:56             ` Nandini Rangaswamy
  2024-09-13 21:27               ` Long Li
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-13 17:56 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Long Li, users


Thanks for clarifying the question regarding the Txd size, Stephen.
I tested RSS for TCP and UDP.
As suggested, I set only the TCP flags in the RSS conf and configured the
netvsc port.

struct rte_eth_conf conf = {
    .intr_conf = {
        .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
               !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
    },
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,
        .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
                    RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                    RTE_ETH_RX_OFFLOAD_RSS_HASH |
                    RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
    },
    .rx_adv_conf.rss_conf = {
        .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
        .rss_key = conf_rss_key,
        .rss_key_len = rss_key_len,
    },
    .txmode = {
        .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
                    RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
    },
};
rte_eth_dev_configure(<netvsc port>, num_rxq, num_txq, &conf);

uint8_t rss_key_temp[64];
struct rte_eth_rss_conf rss_conf = {
    .rss_key = rss_key_temp,
    .rss_key_len = sizeof(rss_key_temp),
};
ret = rte_eth_dev_rss_hash_conf_get(<VF port>, &rss_conf);


Now the VF port RSS offloads show only the TCP flags set, not UDP. I
assumed that the UDP flags would also be set. Is this expected?
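
Note that the configured rss_hf returned by rte_eth_dev_rss_hash_conf_get()
is distinct from the capability mask the PMD advertises; the latter can be
checked roughly like this (a sketch; the VF port id is a placeholder):

    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(vf_port_id, &info) == 0 &&
        (info.flow_type_rss_offloads & RTE_ETH_RSS_NONFRAG_IPV6_UDP))
        printf("IPv6 UDP RSS advertised\n");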

Regards,
Nandini


On Thu, Sep 12, 2024 at 4:09 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:

> On Thu, 12 Sep 2024 13:47:37 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>
> > Thanks for your response, Long Li.
> > I see that with netvsc the maximum number of Tx descriptors is
> > restricted to 4096, whereas the number of Rx descriptors is restricted
> > to 8192.
> > But for the failsafe PMD, we see that both the number of Txd and Rxd is
> > restricted to 8192.
> > How is the netvsc PMD giving the same performance as the failsafe PMD?
> >
> > Regards
>
> I think the limits there were somewhat arbitrarily chosen for netvsc.
> I don't remember a hard reason that would block larger sizes.
>
> Having really big rings won't help performance (i.e., bufferbloat) and
> could cost a lot of memory. All heavy data traffic goes through the VF,
> and that ring is sized separately. Only DoS attacks should be impacted by
> the rx/tx descriptor limits on the netvsc device. The Linux driver
> actually uses a much smaller buffer.
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: Netvsc vs Failsafe Performance
  2024-09-13 17:56             ` Nandini Rangaswamy
@ 2024-09-13 21:27               ` Long Li
  2024-09-13 21:29                 ` Nandini Rangaswamy
  0 siblings, 1 reply; 14+ messages in thread
From: Long Li @ 2024-09-13 21:27 UTC (permalink / raw)
  To: Nandini Rangaswamy, Stephen Hemminger; +Cc: users


It’s a bug in netvsc that RTE_ETH_RSS_NONFRAG_IPV6_UDP is not reported. It
is implied, as in the IPv4 case.

Can you try the following patch?

diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 1ba75ee804..fe1f04d8d9 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -717,6 +717,7 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
        if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
                hv->rss_offloads |= RTE_ETH_RSS_IPV6
-                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
+                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP
+                       | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
        if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
                hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
                        | RTE_ETH_RSS_IPV6_TCP_EX;
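
With the patch applied, the netvsc port should advertise the new bit; a
quick sanity check along these lines (a sketch; the port id is a
placeholder):

    struct rte_eth_dev_info info;

    rte_eth_dev_info_get(netvsc_port_id, &info);
    printf("IPv6 UDP RSS %s\n",
           (info.flow_type_rss_offloads & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ?
           "advertised" : "missing");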


From: Nandini Rangaswamy <nandini.rangaswamy@broadcom.com>
Sent: Friday, September 13, 2024 10:56 AM
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Long Li <longli@microsoft.com>; users@dpdk.org
Subject: Re: Netvsc vs Failsafe Performance

Thanks for clarifying the question regarding the Txd size, Stephen.
I tested RSS for TCP and UDP.
As suggested, I set only the TCP flags in the RSS conf and configured the
netvsc port.

struct rte_eth_conf conf = {
    .intr_conf = {
        .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
               !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
    },
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,
        .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
                    RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                    RTE_ETH_RX_OFFLOAD_RSS_HASH |
                    RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
    },
    .rx_adv_conf.rss_conf = {
        .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
        .rss_key = conf_rss_key,
        .rss_key_len = rss_key_len,
    },
    .txmode = {
        .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
                    RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
    },
};
rte_eth_dev_configure(<netvsc port>, num_rxq, num_txq, &conf);

uint8_t rss_key_temp[64];
struct rte_eth_rss_conf rss_conf = {
    .rss_key = rss_key_temp,
    .rss_key_len = sizeof(rss_key_temp),
};
ret = rte_eth_dev_rss_hash_conf_get(<VF port>, &rss_conf);


Now the VF port RSS offloads show only the TCP flags set, not UDP. I assumed that the UDP flags would also be set. Is this expected?

Regards,
Nandini


On Thu, Sep 12, 2024 at 4:09 PM Stephen Hemminger <stephen@networkplumber.org> wrote:
On Thu, 12 Sep 2024 13:47:37 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Thanks for your response, Long Li.
> I see that with netvsc the maximum number of Tx descriptors is restricted
> to 4096, whereas the number of Rx descriptors is restricted to 8192.
> But for the failsafe PMD, we see that both the number of Txd and Rxd is
> restricted to 8192.
> How is the netvsc PMD giving the same performance as the failsafe PMD?
>
> Regards

I think the limits there were somewhat arbitrarily chosen for netvsc.
I don't remember a hard reason that would block larger sizes.

Having really big rings won't help performance (i.e., bufferbloat) and
could cost a lot of memory. All heavy data traffic goes through the VF,
and that ring is sized separately. Only DoS attacks should be impacted by
the rx/tx descriptor limits on the netvsc device. The Linux driver
actually uses a much smaller buffer.



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-13 21:27               ` Long Li
@ 2024-09-13 21:29                 ` Nandini Rangaswamy
  2024-09-16 22:58                   ` Nandini Rangaswamy
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-13 21:29 UTC (permalink / raw)
  To: Long Li; +Cc: Stephen Hemminger, users


Thanks Long Li.
I shall try this patch and get back to you.

On Fri, Sep 13, 2024 at 2:27 PM Long Li <longli@microsoft.com> wrote:

> It’s a bug in netvsc that RTE_ETH_RSS_NONFRAG_IPV6_UDP is not reported.
> It is implied, as in the IPv4 case.
>
>
>
> Can you try the following patch?
>
>
>
> diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
> index 1ba75ee804..fe1f04d8d9 100644
> --- a/drivers/net/netvsc/hn_rndis.c
> +++ b/drivers/net/netvsc/hn_rndis.c
> @@ -717,6 +717,7 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
>         if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
>                 hv->rss_offloads |= RTE_ETH_RSS_IPV6
> -                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
> +                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP
> +                       | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
>         if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
>                 hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
>                         | RTE_ETH_RSS_IPV6_TCP_EX;
>
>
>
>
>
> From: Nandini Rangaswamy <nandini.rangaswamy@broadcom.com>
> Sent: Friday, September 13, 2024 10:56 AM
> To: Stephen Hemminger <stephen@networkplumber.org>
> Cc: Long Li <longli@microsoft.com>; users@dpdk.org
> Subject: Re: Netvsc vs Failsafe Performance
>
>
>
> Thanks for clarifying the question regarding the Txd size, Stephen.
> I tested RSS for TCP and UDP.
> As suggested, I set only the TCP flags in the RSS conf and configured the
> netvsc port.
>
> struct rte_eth_conf conf = {
>     .intr_conf = {
>         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
>                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
>     },
>     .rxmode = {
>         .mq_mode = RTE_ETH_MQ_RX_RSS,
>         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
>                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
>                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
>                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
>     },
>     .rx_adv_conf.rss_conf = {
>         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
>                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
>         .rss_key = conf_rss_key,
>         .rss_key_len = rss_key_len,
>     },
>     .txmode = {
>         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
>                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
>     },
> };
> rte_eth_dev_configure(<netvsc port>, num_rxq, num_txq, &conf);
>
> uint8_t rss_key_temp[64];
> struct rte_eth_rss_conf rss_conf = {
>     .rss_key = rss_key_temp,
>     .rss_key_len = sizeof(rss_key_temp),
> };
> ret = rte_eth_dev_rss_hash_conf_get(<VF port>, &rss_conf);
>
>
>
>
>
> Now the VF port RSS offloads show only the TCP flags set, not UDP. I
> assumed that the UDP flags would also be set. Is this expected?
>
>
>
> Regards,
>
> Nandini
>
>
>
>
>
> On Thu, Sep 12, 2024 at 4:09 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
>
> On Thu, 12 Sep 2024 13:47:37 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>
> > Thanks for your response, Long Li.
> > I see that with netvsc the maximum number of Tx descriptors is
> > restricted to 4096, whereas the number of Rx descriptors is restricted
> > to 8192.
> > But for the failsafe PMD, we see that both the number of Txd and Rxd is
> > restricted to 8192.
> > How is the netvsc PMD giving the same performance as the failsafe PMD?
> >
> > Regards
>
> I think the limits there were somewhat arbitrarily chosen for netvsc.
> I don't remember a hard reason that would block larger sizes.
>
> Having really big rings won't help performance (i.e., bufferbloat) and
> could cost a lot of memory. All heavy data traffic goes through the VF,
> and that ring is sized separately. Only DoS attacks should be impacted by
> the rx/tx descriptor limits on the netvsc device. The Linux driver
> actually uses a much smaller buffer.
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Netvsc vs Failsafe Performance
  2024-09-13 21:29                 ` Nandini Rangaswamy
@ 2024-09-16 22:58                   ` Nandini Rangaswamy
  2024-09-17 21:56                     ` Long Li
  0 siblings, 1 reply; 14+ messages in thread
From: Nandini Rangaswamy @ 2024-09-16 22:58 UTC (permalink / raw)
  To: Long Li; +Cc: Stephen Hemminger, users


Hi Long,
I tested this patch and it works as expected. The UDP IPv6 RSS offload bit
is set, and my DPDK app is able to successfully configure the netvsc port.
Regards,
Nandini

On Fri, Sep 13, 2024 at 2:29 PM Nandini Rangaswamy
<nandini.rangaswamy@broadcom.com> wrote:

> Thanks Long Li.
> I shall try this patch and get back to you.
>
> On Fri, Sep 13, 2024 at 2:27 PM Long Li <longli@microsoft.com> wrote:
>
>> It’s a bug in netvsc that RTE_ETH_RSS_NONFRAG_IPV6_UDP is not reported.
>> It is implied, as in the IPv4 case.
>>
>>
>>
>> Can you try the following patch?
>>
>>
>>
>> diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
>> index 1ba75ee804..fe1f04d8d9 100644
>> --- a/drivers/net/netvsc/hn_rndis.c
>> +++ b/drivers/net/netvsc/hn_rndis.c
>> @@ -717,6 +717,7 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
>>         if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
>>                 hv->rss_offloads |= RTE_ETH_RSS_IPV6
>> -                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
>> +                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP
>> +                       | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
>>         if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
>>                 hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
>>                         | RTE_ETH_RSS_IPV6_TCP_EX;
>>
>>
>>
>>
>>
>> From: Nandini Rangaswamy <nandini.rangaswamy@broadcom.com>
>> Sent: Friday, September 13, 2024 10:56 AM
>> To: Stephen Hemminger <stephen@networkplumber.org>
>> Cc: Long Li <longli@microsoft.com>; users@dpdk.org
>> Subject: Re: Netvsc vs Failsafe Performance
>>
>>
>>
>> Thanks for clarifying the question regarding the Txd size, Stephen.
>> I tested RSS for TCP and UDP.
>> As suggested, I set only the TCP flags in the RSS conf and configured the
>> netvsc port.
>>
>> struct rte_eth_conf conf = {
>>     .intr_conf = {
>>         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
>>                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
>>     },
>>     .rxmode = {
>>         .mq_mode = RTE_ETH_MQ_RX_RSS,
>>         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
>>                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
>>                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
>>                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
>>     },
>>     .rx_adv_conf.rss_conf = {
>>         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
>>                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
>>         .rss_key = conf_rss_key,
>>         .rss_key_len = rss_key_len,
>>     },
>>     .txmode = {
>>         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
>>                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
>>     },
>> };
>> rte_eth_dev_configure(<netvsc port>, num_rxq, num_txq, &conf);
>>
>> uint8_t rss_key_temp[64];
>> struct rte_eth_rss_conf rss_conf = {
>>     .rss_key = rss_key_temp,
>>     .rss_key_len = sizeof(rss_key_temp),
>> };
>> ret = rte_eth_dev_rss_hash_conf_get(<VF port>, &rss_conf);
>>
>>
>>
>>
>>
>> Now the VF port RSS offloads show only the TCP flags set, not UDP. I
>> assumed that the UDP flags would also be set. Is this expected?
>>
>>
>>
>> Regards,
>>
>> Nandini
>>
>>
>>
>>
>>
>> On Thu, Sep 12, 2024 at 4:09 PM Stephen Hemminger
>> <stephen@networkplumber.org> wrote:
>>
>> On Thu, 12 Sep 2024 13:47:37 -0700
>> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>>
>> > Thanks for your response, Long Li.
>> > I see that with netvsc the maximum number of Tx descriptors is
>> > restricted to 4096, whereas the number of Rx descriptors is restricted
>> > to 8192.
>> > But for the failsafe PMD, we see that both the number of Txd and Rxd is
>> > restricted to 8192.
>> > How is the netvsc PMD giving the same performance as the failsafe PMD?
>> >
>> > Regards
>>
>> I think the limits there were somewhat arbitrarily chosen for netvsc.
>> I don't remember a hard reason that would block larger sizes.
>>
>> Having really big rings won't help performance (i.e., bufferbloat) and
>> could cost a lot of memory. All heavy data traffic goes through the VF,
>> and that ring is sized separately. Only DoS attacks should be impacted
>> by the rx/tx descriptor limits on the netvsc device. The Linux driver
>> actually uses a much smaller buffer.
>



^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: Netvsc vs Failsafe Performance
  2024-09-16 22:58                   ` Nandini Rangaswamy
@ 2024-09-17 21:56                     ` Long Li
  0 siblings, 0 replies; 14+ messages in thread
From: Long Li @ 2024-09-17 21:56 UTC (permalink / raw)
  To: Nandini Rangaswamy; +Cc: Stephen Hemminger, users


Thank you!

Are you seeing problems with UDP traffic on the receive side? If everything works fine for you, I’m sending a patch.

Long

From: Nandini Rangaswamy <nandini.rangaswamy@broadcom.com>
Sent: Monday, September 16, 2024 3:58 PM
To: Long Li <longli@microsoft.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; users@dpdk.org
Subject: Re: Netvsc vs Failsafe Performance

Hi Long,
I tested this patch and it works as expected. The UDP IPv6 RSS offload bit is set, and my DPDK app is able to successfully configure the netvsc port.
Regards,
Nandini

On Fri, Sep 13, 2024 at 2:29 PM Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
Thanks Long Li.
I shall try this patch and get back to you.

On Fri, Sep 13, 2024 at 2:27 PM Long Li <longli@microsoft.com> wrote:
It’s a bug in netvsc that RTE_ETH_RSS_NONFRAG_IPV6_UDP is not reported. It is implied, as in the IPv4 case.

Can you try the following patch?

diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index 1ba75ee804..fe1f04d8d9 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -717,6 +717,7 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
        if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
                hv->rss_offloads |= RTE_ETH_RSS_IPV6
-                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP;
+                       | RTE_ETH_RSS_NONFRAG_IPV6_TCP
+                       | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
        if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
                hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
                        | RTE_ETH_RSS_IPV6_TCP_EX;


From: Nandini Rangaswamy <nandini.rangaswamy@broadcom.com>
Sent: Friday, September 13, 2024 10:56 AM
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Long Li <longli@microsoft.com>; users@dpdk.org
Subject: Re: Netvsc vs Failsafe Performance

Thanks for clarifying the question regarding the Txd size, Stephen.
I tested RSS for TCP and UDP.
As suggested, I set only the TCP flags in the RSS conf and configured the netvsc port.

struct rte_eth_conf conf = {
    .intr_conf = {
        .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
               !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
    },
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,
        .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
                    RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                    RTE_ETH_RX_OFFLOAD_RSS_HASH |
                    RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
    },
    .rx_adv_conf.rss_conf = {
        .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
        .rss_key = conf_rss_key,
        .rss_key_len = rss_key_len,
    },
    .txmode = {
        .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
                    RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
    },
};
rte_eth_dev_configure(<netvsc port>, num_rxq, num_txq, &conf);

uint8_t rss_key_temp[64];
struct rte_eth_rss_conf rss_conf = {
    .rss_key = rss_key_temp,
    .rss_key_len = sizeof(rss_key_temp),
};
ret = rte_eth_dev_rss_hash_conf_get(<VF port>, &rss_conf);


Now the VF port RSS offloads show only the TCP flags set, not UDP. I assumed that the UDP flags would also be set. Is this expected?

Regards,
Nandini


On Thu, Sep 12, 2024 at 4:09 PM Stephen Hemminger <stephen@networkplumber.org> wrote:
On Thu, 12 Sep 2024 13:47:37 -0700
Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:

> Thanks for your response, Long Li.
> I see that with netvsc the maximum number of Tx descriptors is restricted
> to 4096, whereas the number of Rx descriptors is restricted to 8192.
> But for the failsafe PMD, we see that both the number of Txd and Rxd is
> restricted to 8192.
> How is the netvsc PMD giving the same performance as the failsafe PMD?
>
> Regards

I think the limits there were somewhat arbitrarily chosen for netvsc.
I don't remember a hard reason that would block larger sizes.

Having really big rings won't help performance (i.e., bufferbloat) and
could cost a lot of memory. All heavy data traffic goes through the VF,
and that ring is sized separately. Only DoS attacks should be impacted by
the rx/tx descriptor limits on the netvsc device. The Linux driver
actually uses a much smaller buffer.




^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2024-09-17 21:57 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-09-03 21:43 Netvsc vs Failsafe Performance Nandini Rangaswamy
2024-09-04  0:03 ` Stephen Hemminger
2024-09-04  0:21   ` Nandini Rangaswamy
2024-09-04 22:42     ` Stephen Hemminger
2024-09-05  2:30       ` Long Li
2024-09-12 20:47         ` Nandini Rangaswamy
2024-09-12 23:09           ` Stephen Hemminger
2024-09-13 17:56             ` Nandini Rangaswamy
2024-09-13 21:27               ` Long Li
2024-09-13 21:29                 ` Nandini Rangaswamy
2024-09-16 22:58                   ` Nandini Rangaswamy
2024-09-17 21:56                     ` Long Li
2024-09-12 22:02       ` Nandini Rangaswamy
2024-09-12 22:59         ` Stephen Hemminger
