From: shiv chittora
Date: Wed, 26 Jul 2023 10:35:16 +0530
Subject: Enable RSS for virtio application (DPDK version 21.11)
To: users@dpdk.org
List-Id: DPDK usage discussions

I'm using a Nutanix virtual machine to run a DPDK (version 21.11) based application. The application fails during rte_eth_dev_configure(). For our application, RSS support is required.

eth_config.rxmode.mq_mode = ETH_MQ_RX_RSS;

static uint8_t hashKey[] = {
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
        0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A, 0x6D, 0x5A,
};

eth_config.rx_adv_conf.rss_conf.rss_key = hashKey;
eth_config.rx_adv_conf.rss_conf.rss_key_len = sizeof(hashKey);
eth_config.rx_adv_conf.rss_conf.rss_hf = 260;

With the RSS configuration above, the application does not come up. The same application runs without any issues on a VMware virtual machine.

When I set

eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE;
eth_config.rx_adv_conf.rss_conf.rss_hf = 0;

the application starts working fine. Since we need RSS support for our application, I cannot set eth_config.rxmode.mq_mode = ETH_MQ_RX_NONE.
I looked at the DPDK 21.11 release notes, and they mention that virtio_net supports RSS. In this application, traffic is tapped to a capture port. I have also created two queues using ACLI commands:

<acropolis> vm.nic_create nutms1-ms type=kNetworkFunctionNic network_function_nic_type=kTap queues=2

<acropolis> vm.nic_get testvm
xx:xx:xx:xx:xx:xx {
  mac_addr: "xx:xx:xx:xx:xx:xx"
  network_function_nic_type: "kTap"
  network_type: "kNativeNetwork"
  queues: 2
  type: "kNetworkFunctionNic"
  uuid: "9c26c704-bcb3-4483-bdaf-4b64bb9233ef"
}

Additionally, I've turned on DPDK logging; the log output is below.

EAL: PCI device 0000:00:05.0 on NUMA socket 0
EAL:   probe driver: 1af4:1000 net_virtio
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket 0)
EAL:   PCI memory mapped at 0x940000000
EAL:   PCI memory mapped at 0x940001000
virtio_read_caps(): [98] skipping non VNDR cap id: 11
virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4096
virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
virtio_read_caps(): found modern virtio pci device.
virtio_read_caps(): common cfg mapped at: 0x940001000
virtio_read_caps(): device cfg mapped at: 0x940003000
virtio_read_caps(): isr cfg mapped at: 0x940002000
virtio_read_caps(): notify base: 0x940004000, notify off multiplier: 4
vtpci_init(): modern virtio pci detected.
virtio_ethdev_negotiate_features(): guest_features before negotiate = 8000005f10ef8028
virtio_ethdev_negotiate_features(): host_features before negotiate = 130ffffa7
virtio_ethdev_negotiate_features(): features after negotiate = 110ef8020
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_device(): link speed = -1, duplex = 1
virtio_init_device(): config->max_virtqueue_pairs=2
virtio_init_device(): config->status=1
virtio_init_device(): PORT MAC: 50:6B:8D:A9:09:62
virtio_init_queue(): setting up queue: 0 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffab000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffab000
virtio_init_vring(): >>
modern_setup_queue(): queue 0 addresses:
modern_setup_queue():    desc_addr: 7fffab000
modern_setup_queue():    aval_addr: 7fffac000
modern_setup_queue():    used_addr: 7fffad000
modern_setup_queue():    notify addr: 0x940004000 (notify offset: 0)
virtio_init_queue(): setting up queue: 1 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fffa6000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ffa6000
virtio_init_vring(): >>
modern_setup_queue(): queue 1 addresses:
modern_setup_queue():    desc_addr: 7fffa6000
modern_setup_queue():    aval_addr: 7fffa7000
modern_setup_queue():    used_addr: 7fffa8000
modern_setup_queue():    notify addr: 0x940004004 (notify offset: 1)
virtio_init_queue(): setting up queue: 2 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff98000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff98000
virtio_init_vring(): >>
modern_setup_queue(): queue 2 addresses:
modern_setup_queue():    desc_addr: 7fff98000
modern_setup_queue():    aval_addr: 7fff99000
modern_setup_queue():    used_addr: 7fff9a000
modern_setup_queue():    notify addr: 0x940004008 (notify offset: 2)
virtio_init_queue(): setting up queue: 3 on NUMA node 0
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem: 0x7fff93000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff93000
virtio_init_vring(): >>
modern_setup_queue(): queue 3 addresses:
modern_setup_queue():    desc_addr: 7fff93000
modern_setup_queue():    aval_addr: 7fff94000
modern_setup_queue():    used_addr: 7fff95000
modern_setup_queue():    notify addr: 0x94000400c (notify offset: 3)
virtio_init_queue(): setting up queue: 4 on NUMA node 0
virtio_init_queue(): vq_size: 64
virtio_init_queue(): vring_size: 4612, rounded_vring_size: 8192
virtio_init_queue(): vq->vq_ring_mem: 0x7fff87000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x17ff87000
virtio_init_vring(): >>
modern_setup_queue(): queue 4 addresses:
modern_setup_queue():    desc_addr: 7fff87000
modern_setup_queue():    aval_addr: 7fff87400
modern_setup_queue():    used_addr: 7fff88000
modern_setup_queue():    notify addr: 0x940004010 (notify offset: 4)
eth_virtio_pci_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
EAL: lib.telemetry log level changed from disabled to debug
TELEMETRY: Attempting socket bind to path '/var/run/dpdk/rte/dpdk_telemetry.v2'
TELEMETRY: Initial bind to socket '/var/run/dpdk/rte/dpdk_telemetry.v2' failed.
TELEMETRY: Attempting unlink and retrying bind
TELEMETRY: Socket creation and binding ok
TELEMETRY: Telemetry initialized ok
TELEMETRY: No legacy callbacks, legacy socket not created
[Wed Jul 26 04:44:42 2023][ms_dpi: 28098] DPDK Initialised
[Wed Jul 26 04:44:42 2023][ms_dpi: 28098] Finished DPDK logging session

Running the RSS configuration command in testpmd produces the following result.
testpmd> port config all rss all
Port 0 modified RSS hash function based on hardware support, requested:0x17f83fffc configured:0
Multi-queue RSS mode isn't enabled.
Configuration of RSS hash at ethernet port 0 failed with error (95): Operation not supported.

Any suggestions on how to enable RSS support in this situation would be greatly appreciated.

Thank you for your assistance.