From: Edwin Brossette
Date: Tue, 1 Oct 2024 18:09:17 +0200
Subject: net/netvsc: problem with configuring netvsc port after first port start
To: dev@dpdk.org
Cc: Didier Pallard, Laurent Hardy, Olivier Matz, longli@microsoft.com, weh@microsoft.com

Hello,

I have run into an issue since I switched from the failsafe/vdev-netvsc pmd to the netvsc pmd.
I have noticed that once I have first started a port, if I then stop and reconfigure it, the call to rte_eth_dev_configure() fails with a couple of errors logged by the netvsc pmd. It can be reproduced quite easily with testpmd.

I tried it on my Azure setup:

root@dut-azure:~# ip -d l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:b7 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev 00224839-09b7-0022-4839-09b700224839
6: enP27622s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth1 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev 6be6:00:02.0
    altname enP27622p0s2
7: enP25210s4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth3 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev 627a:00:02.0
    altname enP25210p0s2
8: enP16113s3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth2 state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 64 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev 3ef1:00:02.0
    altname enP16113p0s2
9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:02:ca brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev 00224839-02ca-0022-4839-02ca00224839
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev 00224839-0936-0022-4839-093600224839
11: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:22:48:39:0f:cd brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 addrgenmode eui64 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev 00224839-0fcd-0022-4839-0fcd00224839

As you can see here, I have 3 netvsc interfaces, eth1, eth2 and eth3, with their 3 NIC-accelerated counterparts.
I rebind them to uio_hv_generic and start testpmd:

root@dut-azure:~# dpdk-testpmd -- -i --rxq=2 --txq=2 --coremask=0x0c --total-num-mbufs=25000
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
mlx5_net: No available register for sampler.
hn_vf_attach(): found matching VF port 2
hn_vf_attach(): found matching VF port 0
hn_vf_attach(): found matching VF port 1
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured cores 2
testpmd: create a new mbuf pool <mb_pool_0>: n=25000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 3 (socket 0)
Port 3: 00:22:48:39:02:CA
Configuring Port 4 (socket 0)
Port 4: 00:22:48:39:09:36
Configuring Port 5 (socket 0)
Port 5: 00:22:48:39:0F:CD
Checking link statuses...
Done
testpmd>


The 3 ports are initialized and started correctly. For example, here is the port info for the first port:

testpmd> show port info 3

********************* Infos for port 3  *********************
MAC address: 00:22:48:39:02:CA
Device name: 00224839-02ca-0022-4839-02ca00224839
Driver name: net_netvsc
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 2
Max possible RX queues: 64
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 2
Max possible TX queues: 64
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none

 -> First, stop the port:

testpmd> port stop 3
Stopping ports...
Done

 -> Then, change something in the port config. This will trigger a call to rte_eth_dev_configure() on the next port start. Here I change the link speed/duplex:

testpmd> port config 3 speed 10000 duplex full
testpmd>

 -> Finally, try to start the port:

testpmd> port start 3
Configuring Port 3 (socket 0)
hn_nvs_alloc_subchans(): nvs subch alloc failed: 0x2
hn_dev_configure(): subchannel configuration failed
ETHDEV: Port3 dev_configure = -5
Fail to configure port 3  <------


As you can see, the port configuration fails.
The error happens in hn_nvs_alloc_subchans(). Maybe the previous resources were not properly deallocated on port stop?
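
For reference, the same sequence can be written directly against the ethdev API. The sketch below is only an illustration of what my application does (the function name is mine, the 2/2 queue counts and the link_speeds change mirror the testpmd session above, and queue re-setup plus most error handling are trimmed); it is not code taken from testpmd:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: stop a started netvsc port, change its configuration, then
 * configure it again. The second rte_eth_dev_configure() is where the
 * netvsc pmd fails for me. */
static int
reconfigure_port(uint16_t port_id)
{
        struct rte_eth_conf conf;
        int ret;

        ret = rte_eth_dev_stop(port_id);        /* "port stop 3" */
        if (ret < 0)
                return ret;

        memset(&conf, 0, sizeof(conf));
        /* equivalent of "port config 3 speed 10000 duplex full" */
        conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;

        /* fails with -5 (-EIO) on netvsc:
         * hn_nvs_alloc_subchans(): nvs subch alloc failed: 0x2 */
        ret = rte_eth_dev_configure(port_id, 2, 2, &conf);
        if (ret < 0)
                return ret;

        /* rx/tx queue setup omitted; "port start 3" would follow */
        return rte_eth_dev_start(port_id);
}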

When I looked around in the pmd's code, I noticed the function hn_reinit() with the following comment:

/*
 * Connects EXISTING rx/tx queues to NEW vmbus channel(s), and
 * re-initializes NDIS and RNDIS, including re-sending initial
 * NDIS/RNDIS configuration. To be used after the underlying vmbus
 * has been un- and re-mapped, e.g. as must happen when the device
 * MTU is changed.
 */

This function shows that it is possible to call rte_eth_dev_configure() again without failing, since hn_reinit() calls hn_dev_configure() and is itself called when the MTU is changed.
I suspect the operations described in the comment above might also be needed when running rte_eth_dev_configure().
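
To make the suggestion more concrete, here is a purely hypothetical sketch of the direction I mean, inside the driver itself. hn_dev_configure() and hn_reinit() are the existing static functions in drivers/net/netvsc/hn_ethdev.c; the wrapper name is mine, and I am assuming from the MTU path that hn_reinit() takes the device and an MTU (I have not verified the exact signature or tried to compile this):

/* Hypothetical, untested sketch: if the subchannel allocation fails
 * because the old vmbus mapping is still in place, fall back to the
 * same un-map/re-map + NDIS/RNDIS replay that the MTU path uses. */
static int
hn_dev_configure_or_reinit(struct rte_eth_dev *dev)
{
        int ret = hn_dev_configure(dev);

        /* hn_reinit() remaps the vmbus channel(s), reconnects the
         * existing queues and ends up calling hn_dev_configure()
         * again itself, which is presumably why the MTU-change path
         * succeeds where a plain reconfigure does not. */
        if (ret == -EIO)
                ret = hn_reinit(dev, dev->data->mtu);

        return ret;
}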

Is this a known issue?
In my application, this bug causes problems whenever I need to restart and reconfigure a port.
Thank you for considering this issue.

Regards,
Edwin Brossette.