> Subject: Re: Netvsc vs Failsafe Performance
>
> On Tue, 3 Sep 2024 17:21:48 -0700
> Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
>
> > Hi Stephen/Long,
> > dpdk_netvsc_port_configure:1873 Configure port eth2/2. I am testing
> > with TCP traffic (iperf3) generated between pairs of clients and
> > servers, with the DPDK app forwarding traffic between them.
> > This is the config being passed when configuring netvsc:
> > lsc_intr=1
> > rxq/txq=2/2,
> > rss is enabled with rss_hf=0x0000000000000c30
> > tx_ol=0x00000000000006
> > rx_ol=0x00000000080007
> >
> > The RSS key length is 64.
> > struct rte_eth_conf conf = {
> >     .intr_conf = {
> >         .lsc = !dpdk.lsc_intr_disable && !dpdk_if->lsc_intr_disable &&
> >                !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC),
> >     },
> >     .rxmode = {
> >         .mq_mode = RTE_ETH_MQ_RX_RSS,
> >         .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> >                     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> >                     RTE_ETH_RX_OFFLOAD_RSS_HASH |
> >                     RTE_ETH_RX_OFFLOAD_UDP_CKSUM,
> >     },
> >     .rx_adv_conf.rss_conf = {
> >         .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
> >                   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> >                   RTE_ETH_RSS_NONFRAG_IPV6_TCP,
> >         .rss_key = conf_rss_key,
> >         .rss_key_len = rss_key_len,
> >     },
> >     .txmode = {
> >         .offloads = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> >                     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
> >     },
> > };
> >
> > Regards,
> > Nandini
> >
> > On Tue, Sep 3, 2024 at 5:03 PM Stephen Hemminger
> > <stephen@networkplumber.org>
> > wrote:
> >
> > > On Tue, 3 Sep 2024 14:43:28 -0700
> > > Nandini Rangaswamy <nandini.rangaswamy@broadcom.com> wrote:
> > >
> > > > Hi Stephen and Long,
> > > > I was going through one of the netvsc patches
> > > > https://mails.dpdk.org/archives/dev/2018-August/110559.html which
> > > > mentioned
> > > > that netvsc and failsafe give the same performance in VF path
> > > > whereas for some exception-path tests, about a 22% performance gain
> > > > is seen.
> > > > I ran some tests locally with my dpdk app integrated with netvsc
> > > > PMD and observed that netvsc does give nearly the same performance
> > > > as failsafe in the VF path.
> > > > Since the official documentation does not explicitly state this, I
> > > > would like to confirm that it holds.
> > > > Regards,
> > > > Nandini
> > > >
> > >
> > > Shouldn't be. What settings are you using?
> > > Both failsafe and netvsc just pass packets to the VF if present.
> > > There are even more locks to go through with failsafe.
> > >
> > > Are you sure the test doesn't exercise something like checksumming,
> > > which may be different?
> > >
> >
>
> How many streams? RSS won't matter unless there are multiple streams.
> The netvsc driver does not have RSS for UDP as a listed flag.
> It turns out that for that version of NDIS, if you ask for TCP RSS, UDP RSS is
> implied.
>
> The RSS key must be 40 bytes (Toeplitz), not 64 bytes.
> To be safe, just use the default key (rss_key == NULL, rss_key_len = 0).
>
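Stephen's suggestion above amounts to leaving the key fields zeroed so the PMD programs its own default 40-byte Toeplitz key. A minimal sketch of just the RSS portion of `rte_eth_conf` (field names are from the rte_ethdev API; the `rss_hf` choice mirrors the config quoted earlier):

```c
/* Sketch: let the netvsc PMD use its default 40-byte Toeplitz key
 * instead of passing a 64-byte application key. */
struct rte_eth_conf conf = {
    .rxmode = {
        .mq_mode = RTE_ETH_MQ_RX_RSS,
    },
    .rx_adv_conf.rss_conf = {
        .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                  RTE_ETH_RSS_NONFRAG_IPV6_TCP,
        .rss_key = NULL,   /* NULL key ... */
        .rss_key_len = 0,  /* ... and zero length selects the driver default */
    },
};
```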
> Check that packets are going to the VF. One way to do that is to look at
> the xstats on both the netvsc and mlx5 devices.
>
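For the xstats check, something like the following sketch dumps every extended counter for a port so the VF (mlx5) and synthetic (netvsc) counters can be compared side by side. It assumes EAL is already initialized and `port_id` is valid; error handling is trimmed for brevity, and it requires a DPDK build environment to compile:

```c
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print all extended statistics for one port. Call this for both the
 * netvsc port and the VF port, then compare rx/tx packet counters to
 * see which device is actually carrying the traffic. */
static void dump_xstats(uint16_t port_id)
{
    /* First call with NULL to query how many xstats the port exposes. */
    int n = rte_eth_xstats_get(port_id, NULL, 0);
    if (n <= 0)
        return;

    struct rte_eth_xstat *stats = malloc(n * sizeof(*stats));
    struct rte_eth_xstat_name *names = malloc(n * sizeof(*names));

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, stats, n);

    for (int i = 0; i < n; i++)
        printf("port %u: %s = %" PRIu64 "\n",
               port_id, names[stats[i].id].name, stats[i].value);

    free(stats);
    free(names);
}
```

Alternatively, `show port xstats <port>` in dpdk-testpmd gives the same information interactively.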
If most traffic goes through the VF, you won't see much difference in performance between netvsc and failsafe, because neither is on the data path.
It seems the 20% performance gain was measured on the synthetic path, meaning that traffic does not go through the VF. In this scenario, netvsc has an advantage over failsafe since the traffic data doesn't need to be copied around in kernel space.