DPDK patches and discussions
* net/netvsc: problem with configuring netvsc port after first port start
@ 2024-10-01 16:09 Edwin Brossette
  2024-10-01 17:27 ` Stephen Hemminger
  0 siblings, 1 reply; 2+ messages in thread
From: Edwin Brossette @ 2024-10-01 16:09 UTC (permalink / raw)
  To: dev; +Cc: Didier Pallard, Laurent Hardy, Olivier Matz, longli, weh


Hello,

I have run into an issue since I switched from the failsafe/vdev-netvsc pmd
to the netvsc pmd.
Once I have started a port for the first time, if I then stop it and try to
reconfigure it, the call to rte_eth_dev_configure() fails with a couple of
errors logged by the netvsc pmd. It can be reproduced quite easily with
testpmd.

I tried it on my Azure setup:

root@dut-azure:~# ip -d l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1
gso_max_size 65536 gso_max_segs 65535
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode
DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:b7 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-09b7-0022-4839-09b700224839
6: enP27622s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
master eth1 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size
65536 gso_max_segs 65535 parentbus pci parentdev 6be6:00:02.0
    altname enP27622p0s2
7: enP25210s4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
master eth3 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size
65536 gso_max_segs 65535 parentbus pci parentdev 627a:00:02.0
    altname enP25210p0s2
8: enP16113s3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq
master eth2 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size
65536 gso_max_segs 65535 parentbus pci parentdev 3ef1:00:02.0
    altname enP16113p0s2
9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode
DEFAULT group default qlen 1000
    link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-02ca-0022-4839-02ca00224839
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode
DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-0936-0022-4839-093600224839
11: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode
DEFAULT group default qlen 1000
    link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu
68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64
gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev
00224839-0fcd-0022-4839-0fcd00224839

As you can see, I have three netvsc interfaces (eth1, eth2 and eth3), each
with its NIC-accelerated VF counterpart. I rebind them to uio_hv_generic and
start testpmd:

root@dut-azure:~# dpdk-testpmd -- -i --rxq=2 --txq=2 --coremask=0x0c
--total-num-mbufs=25000
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
hn_vf_attach(): found matching VF port 2
hn_vf_attach(): found matching VF port 0
hn_vf_attach(): found matching VF port 1
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured
cores 2
testpmd: create a new mbuf pool <mb_pool_0>: n=25000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 3 (socket 0)
Port 3: 00:22:48:39:02:CA
Configuring Port 4 (socket 0)
Port 4: 00:22:48:39:09:36
Configuring Port 5 (socket 0)
Port 5: 00:22:48:39:0F:CD
Checking link statuses...
Done
testpmd>


The 3 ports are initialized and started correctly. For example, here is the
port info for the first of them (port 3):

testpmd> show port info 3

********************* Infos for port 3  *********************
MAC address: 00:22:48:39:02:CA
Device name: 00224839-02ca-0022-4839-02ca00224839
Driver name: net_netvsc
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 2
Max possible RX queues: 64
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 2
Max possible TX queues: 64
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none

 -> First, stop the port:

testpmd> port stop 3
Stopping ports...
Done

 -> Then, change something in the port config. This will trigger a call
to rte_eth_dev_configure() on the next port start. Here I change the link
speed/duplex:
testpmd> port config 3 speed 10000 duplex full
testpmd>

 -> Finally, try to start the port:

testpmd> port start 3
Configuring Port 3 (socket 0)
hn_nvs_alloc_subchans(): nvs subch alloc failed: 0x2
hn_dev_configure(): subchannel configuration failed
ETHDEV: Port3 dev_configure = -5
Fail to configure port 3  <------


As you can see, the port configuration fails.
The error happens in hn_nvs_alloc_subchans(). Maybe the previous resources
were not properly deallocated on port stop?
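
For reference, the failing sequence boils down to the following at the ethdev
API level. This is only a minimal sketch: the port id, queue counts and link
speed are illustrative, and the rx/tx queue setup that normally follows
rte_eth_dev_configure() is left out.

#include <string.h>
#include <rte_ethdev.h>

static int
reconfigure_port(uint16_t port_id)
{
	struct rte_eth_conf conf;
	int ret;

	memset(&conf, 0, sizeof(conf));
	/* e.g. request a fixed 10G link, as "port config 3 speed 10000" does */
	conf.link_speeds = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_FIXED;

	ret = rte_eth_dev_stop(port_id);
	if (ret != 0)
		return ret;

	/* on netvsc, this second configure fails in hn_nvs_alloc_subchans() */
	ret = rte_eth_dev_configure(port_id, 2, 2, &conf);
	if (ret != 0)
		return ret;	/* observed: -5 (-EIO) */

	/* rx/tx queue setup would normally be redone here */
	return rte_eth_dev_start(port_id);
}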

When I looked around in the pmd's code, I noticed the function hn_reinit()
with the following comment:

/*
 * Connects EXISTING rx/tx queues to NEW vmbus channel(s), and
 * re-initializes NDIS and RNDIS, including re-sending initial
 * NDIS/RNDIS configuration. To be used after the underlying vmbus
 * has been un- and re-mapped, e.g. as must happen when the device
 * MTU is changed.
 */

This function shows that it is possible to run hn_dev_configure() again
without failing: hn_reinit() calls hn_dev_configure() and is itself called
when the MTU is changed.
I suspect the operations described in the comment above might also be needed
when running rte_eth_dev_configure().
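
To illustrate the suspicion, here is a hypothetical experiment (a sketch only:
it assumes rte_eth_dev_set_mtu() is the entry point that ends up in
hn_reinit(), as the comment suggests, and that the MTU may be changed while
the port is stopped):

#include <stdio.h>
#include <rte_ethdev.h>

static void
compare_configure_paths(uint16_t port_id, const struct rte_eth_conf *conf)
{
	rte_eth_dev_stop(port_id);

	/* path 1: MTU change -> hn_reinit() -> hn_dev_configure() */
	if (rte_eth_dev_set_mtu(port_id, 1600) != 0)
		printf("set_mtu failed\n");

	/* path 2: plain reconfigure, fails in hn_nvs_alloc_subchans() */
	if (rte_eth_dev_configure(port_id, 2, 2, conf) != 0)
		printf("dev_configure failed\n");
}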

Is this a known issue?
In my application, this bug causes problems when I need to restart and
reconfigure the port.
Thank you for considering this issue.

Regards,
Edwin Brossette.



* Re: net/netvsc: problem with configuring netvsc port after first port start
  2024-10-01 16:09 net/netvsc: problem with configuring netvsc port after first port start Edwin Brossette
@ 2024-10-01 17:27 ` Stephen Hemminger
  0 siblings, 0 replies; 2+ messages in thread
From: Stephen Hemminger @ 2024-10-01 17:27 UTC (permalink / raw)
  To: Edwin Brossette
  Cc: dev, Didier Pallard, Laurent Hardy, Olivier Matz, longli, weh

On Tue, 1 Oct 2024 18:09:17 +0200
Edwin Brossette <edwin.brossette@6wind.com> wrote:

> testpmd> port stop 3  
> Stopping ports...
> Done
> 
>  -> Then, change something in the port config. This will trigger a call  
> to rte_eth_dev_configure() on the next port start. Here I change the link
> speed/duplex:
> testpmd> port config 3 speed 10000 duplex full
> testpmd>  
> 
>  -> Finally, try to start the port:  
> 
> testpmd> port start 3  
> Configuring Port 3 (socket 0)
> hn_nvs_alloc_subchans(): nvs subch alloc failed: 0x2
> hn_dev_configure(): subchannel configuration failed
> ETHDEV: Port3 dev_configure = -5
> Fail to configure port 3  <------
> 
> 
> As you can see, the port configuration fails.
> The error happens in hn_nvs_alloc_subchans(). Maybe the previous ressources
> were not properly deallocated on port stop?

A "channel" is the VMBUS instance used to communicate with the host.
The "sub-channel" is a the secondary channel associated with multi-queue.

There does not appear to be an NVSP operation to deallocate secondary
channels. Other versions (FreeBSD, Linux) do not allow reconfiguring
the number of queues; instead, the subchannels are created when the device
is attached.

Looks like a VMBUS protocol limitation in the internal APIs.

When using failsafe/tun, the driver is actually faking the number of queues,
since the TAP device queues sit between userspace (the DPDK PMD) and the
kernel. The TAP queues are not necessarily associated with the netvsc kernel
driver queues.

