DPDK usage discussions
* Enable RSS for virtio application ( dpdk version 21.11)
@ 2023-07-26  5:05 shiv chittora
  2023-07-26  7:03 ` Bing Zhao
From: shiv chittora @ 2023-07-26  5:05 UTC
  To: users


I'm running a DPDK (version 21.11) based application on a Nutanix virtual
machine. The application fails during rte_eth_dev_configure(). Our
application requires RSS support.

eth_config.rxmode.mq_mode = ETH_MQ_RX_RSS;

/* 40-byte symmetric RSS hash key (0x6D5A repeated). */
static uint8_t hashKey[] = {
    0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
    0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
    0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
    0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
    0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
};

eth_config.rx_adv_conf.rss_conf.rss_key = hashKey;
eth_config.rx_adv_conf.rss_conf.rss_key_len = sizeof(hashKey);
eth_config.rx_adv_conf.rss_conf.rss_hf = 260; /* 0x104 = ETH_RSS_IPV4 | ETH_RSS_IPV6 */



With this RSS configuration the application does not come up. The same
application runs without any issues on a VMware virtual machine.

When I set

    eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE;
    eth_config.rx_adv_conf.rss_conf.rss_hf = 0;

the application starts working fine. Since we need RSS support for our
application, I cannot set eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE.

I looked at the DPDK 21.11 release notes, and they mention that the virtio
PMD supports RSS.
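
For what it's worth, one pattern would be to mask the requested hash types
against the port's reported capabilities, so that rte_eth_dev_configure()
does not fail outright. A minimal sketch, assuming DPDK 21.11 names and a
port_id from the surrounding code:

struct rte_eth_dev_info dev_info;
uint64_t requested = ETH_RSS_IPV4 | ETH_RSS_IPV6; /* == 260 */

if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
    rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get failed\n");

/* Keep only the hash types this PMD reports as supported. */
eth_config.rx_adv_conf.rss_conf.rss_hf =
    requested & dev_info.flow_type_rss_offloads;

if (eth_config.rx_adv_conf.rss_conf.rss_hf == 0) {
    /* Port offers no usable RSS types (e.g. virtio without
     * VIRTIO_NET_F_RSS); fall back to single-queue mode. */
    eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE;
}

This does not enable RSS where the device cannot negotiate it, but it makes
the missing capability visible instead of failing configuration.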


In this application, traffic is tapped to a capture port. I have also
created two queues using ACLI commands.

<acropolis> vm.nic_create nutms1-ms type=kNetworkFunctionNic
network_function_nic_type=kTap queues=2

<acropolis> vm.nic_get testvm
xx:xx:xx:xx:xx:xx {
  mac_addr: "xx:xx:xx:xx:xx:xx"
  network_function_nic_type: "kTap"
  network_type: "kNativeNetwork"
  queues: 2
  type: "kNetworkFunctionNic"
  uuid: "9c26c704-bcb3-4483-bdaf-4b64bb9233ef"
}


Additionally, I've turned on DPDK logging; the log output is below.

EAL: PCI device 0000:00:05.0 on NUMA socket 0
EAL:   probe driver: 1af4:1000 net_virtio
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
EAL:   PCI memory mapped at 0x940000000
EAL:   PCI memory mapped at 0x940001000
virtio_read_caps(): [98] skipping non VNDR cap id: 11
virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4096
virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
virtio_read_caps(): found modern virtio pci device.
virtio_read_caps(): common cfg mapped at: 0x940001000
virtio_read_caps(): device cfg mapped at: 0x940003000
virtio_read_caps(): isr cfg mapped at: 0x940002000
virtio_read_caps(): notify base: 0x940004000, notify off multiplier: 4
vtpci_init(): modern virtio pci detected.
virtio_ethdev_negotiate_features(): guest_features before negotiate = 8000005f10ef8028
virtio_ethdev_negotiate_features(): host_features before negotiate = 130ffffa7
virtio_ethdev_negotiate_features(): features after negotiate = 110ef8020
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_device(): link speed = -1, duplex = 1
virtio_init_device(): config->max_virtqueue_pairs=2
virtio_init_device(): config->status=1
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_queue(): setting up queue: 0 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffab000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffab000
virtio_init_vring():  >>
modern_setup_queue(): queue 0 addresses:
modern_setup_queue():    desc_addr: 7fffab000
modern_setup_queue():    aval_addr: 7fffac000
modern_setup_queue():    used_addr: 7fffad000
modern_setup_queue():    notify addr: 0x940004000 (notify offset: 0)
virtio_init_queue(): setting up queue: 1 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffa6000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffa6000
virtio_init_vring():  >>
modern_setup_queue(): queue 1 addresses:
modern_setup_queue():    desc_addr: 7fffa6000
modern_setup_queue():    aval_addr: 7fffa7000
modern_setup_queue():    used_addr: 7fffa8000
modern_setup_queue():    notify addr: 0x940004004 (notify offset: 1)
virtio_init_queue(): setting up queue: 2 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff98000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff98000
virtio_init_vring():  >>
modern_setup_queue(): queue 2 addresses:
modern_setup_queue():    desc_addr: 7fff98000
modern_setup_queue():    aval_addr: 7fff99000
modern_setup_queue():    used_addr: 7fff9a000
modern_setup_queue():    notify addr: 0x940004008 (notify offset: 2)
virtio_init_queue(): setting up queue: 3 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff93000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff93000
virtio_init_vring():  >>
modern_setup_queue(): queue 3 addresses:
modern_setup_queue():    desc_addr: 7fff93000
modern_setup_queue():    aval_addr: 7fff94000
modern_setup_queue():    used_addr: 7fff95000
modern_setup_queue():    notify addr: 0x94000400c (notify offset: 3)
virtio_init_queue(): setting up queue: 4 on NUMA node 0
virtio_init_queue(): vq_size: 64
virtio_init_queue(): vring_size: 4612, rounded_vring_size: 8192
virtio_init_queue(): vq->vq_ring_mem: 0x7fff87000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff87000
virtio_init_vring():  >>
modern_setup_queue(): queue 4 addresses:
modern_setup_queue():    desc_addr: 7fff87000
modern_setup_queue():    aval_addr: 7fff87400
modern_setup_queue():    used_addr: 7fff88000
modern_setup_queue():    notify addr: 0x940004010 (notify offset: 4)
eth_virtio_pci_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
EAL: lib.telemetry log level changed from disabled to debug
TELEMETRY: Attempting socket bind to path '/var/run/dpdk/rte/dpdk_telemetry.v2'
TELEMETRY: Initial bind to socket '/var/run/dpdk/rte/dpdk_telemetry.v2' failed.
TELEMETRY: Attempting unlink and retrying bind
TELEMETRY: Socket creation and binding ok
TELEMETRY: Telemetry initialized ok
TELEMETRY: No legacy callbacks, legacy socket not created
[Wed Jul 26 04:44:42 2023][ms_dpi: 28098] DPDK Initialised
[Wed Jul 26 04:44:42 2023][ms_dpi: 28098] Finished DPDK logging session


Running the RSS configuration command in testpmd produces the following
output.

testpmd> port config all rss all
Port 0 modified RSS hash function based on hardware support,requested:0x17f83fffc configured:0
Multi-queue RSS mode isn't enabled.
Configuration of RSS hash at ethernet port 0 failed with error (95): Operation not supported.
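
As a cross-check, testpmd can also print what the port claims to support;
the standard command is:

testpmd> show port info 0

Among other things it lists the supported RSS offload flow types
(presumably empty here, given the error above).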


Any suggestions on how to enable RSS support in this situation would be
greatly appreciated.

Thank you for your assistance.



* RE: Enable RSS for virtio application ( dpdk version 21.11)
  2023-07-26  5:05 Enable RSS for virtio application ( dpdk version 21.11) shiv chittora
@ 2023-07-26  7:03 ` Bing Zhao
  2023-07-26  7:32   ` shiv chittora
From: Bing Zhao @ 2023-07-26  7:03 UTC
  To: shiv chittora, users


IIRC, “VIRTIO_NET_F_RSS” is a capability reported and decided during the driver setup/negotiation stage. Most likely the libs/drivers running on the host for this VM do not support the feature.

Have you tried updating the VM, or the VirtIO package/lib for this VM?
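
This matches the log in your mail: VIRTIO_NET_F_RSS is feature bit 60 in
the VirtIO spec, and the logged host_features value (0x130ffffa7) has no
bit set above bit 32, so the host device never offers RSS to the guest. A
minimal standalone sketch to decode the logged value:

#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_F_RSS 60 /* feature bit number per the VirtIO spec */

int main(void)
{
    /* "host_features before negotiate" from the DPDK log. */
    uint64_t host_features = 0x130ffffa7ULL;

    printf("host offers VIRTIO_NET_F_RSS: %s\n",
           (host_features >> VIRTIO_NET_F_RSS) & 1 ? "yes" : "no");
    return 0; /* prints "no" for this device */
}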


* Re: Enable RSS for virtio application ( dpdk version 21.11)
  2023-07-26  7:03 ` Bing Zhao
@ 2023-07-26  7:32   ` shiv chittora
  2023-07-28 14:08     ` Thomas Monjalon
From: shiv chittora @ 2023-07-26  7:32 UTC
  To: Bing Zhao; +Cc: users


Thanks, Bing, for the quick response.

The VM runs Linux kernel 4.9, which ships virtio driver version 1.0.0.

ethtool -i eth1
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:04.0

The Nutanix documentation states: "Ensure the AHV UVM is running the latest
Nutanix VirtIO driver package. Nutanix VirtIO 1.1.6 or higher is required
for RSS support." Linux kernel 5.4 and later ship VirtIO 1.1.6.

Since the application is built on DPDK, the PMD driver takes over the
interface rather than using the kernel's virtio_net driver (apologies if
I'm mistaken). The DPDK PMD version in use does support RSS.

Because of the client-centric nature of this application, upgrading the
kernel will be challenging.

Do you believe the only option is to upgrade the VM kernel version?

Thanks,
Shiv

On Wed, Jul 26, 2023 at 12:33 PM Bing Zhao <bingz@nvidia.com> wrote:

> IIRC, “VIRTIO_NET_F_RSS” is a capability reported and decided during the
> driver setup/negotiation stage. Most likely the libs/drivers running on
> the host for this VM do not support the feature.
>
> Have you tried updating the VM, or the VirtIO package/lib for this VM?


* Re: Enable RSS for virtio application ( dpdk version 21.11)
  2023-07-26  7:32   ` shiv chittora
@ 2023-07-28 14:08     ` Thomas Monjalon
From: Thomas Monjalon @ 2023-07-28 14:08 UTC
  To: shiv chittora; +Cc: Bing Zhao, users, maxime.coquelin

You may need vhost running in the Linux kernel with some BPF code.
There is documentation about eBPF RSS:
https://qemu.readthedocs.io/en/latest/devel/ebpf_rss.html
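
For reference, on plain QEMU the virtio-net RSS and hash-report features
are opt-in device properties; a sketch of the relevant flags (hypothetical
invocation, requiring a QEMU version that supports these properties, plus
the eBPF steering program above when vhost runs in the kernel):

-netdev tap,id=net0,vhost=on,queues=2 \
-device virtio-net-pci,netdev=net0,mq=on,rss=on,hash=on

How this maps to the Nutanix AHV configuration is a question for the AHV
tooling.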


26/07/2023 09:32, shiv chittora:
> Since the application is built on DPDK, the PMD driver takes over the
> interface rather than using the kernel's virtio_net driver (apologies if
> I'm mistaken). The DPDK PMD version in use does support RSS.
>
> Do you believe the only option is to upgrade the VM kernel version?

