From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1057] Unable to use flow rules on VF
Date: Fri, 22 Jul 2022 14:57:58 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1057

            Bug ID: 1057
           Summary: Unable to use flow rules on VF
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: hrvoje.habjanic@zg.ht.hr
  Target Milestone: ---

Created attachment 214
  --> https://bugs.dpdk.org/attachment.cgi?id=214&action=edit
rte_config file

Hi.

I'm trying to use the rte_flow API to steer packets to different queues, but
to no avail.

An important note here is that I'm working with a VF (SR-IOV) inside a VM.
The VM is Ubuntu 20.04, and the DPDK version used is 21.11.1 (statically
compiled). Tests are done with the testpmd application.

I tested the same rule with the following DPDK drivers:

  ixgbevf - NOT WORKING (x520)
  mlx5    - WORKING
  iavf    - NOT WORKING (xxv710, xl710, e810)

Cards used are:

xx:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
        Subsystem: Hewlett Packard Enterprise Ethernet 10/25Gb 2-port 661SFP28 Adapter

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)
        Subsystem: Intel Corporation Ethernet Converged Network Adapter XL710-Q2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
        Subsystem: Intel Corporation Ethernet Network Adapter E810-C-Q2

The cards behave the same in different servers; I tried them on Sandy Bridge,
Skylake and Cascade Lake machines.
Drivers used (and versions):

  ixgbe-5.12.5
  ixgbevf-4.12.4
  i40e-2.18.9
  ice-1.7.16
  iavf-4.2.7

Driver versions are the same in the host and in the guest (VM).

The main question here is: do these cards support the rte_flow API applied to
a VF port? If they do, what is wrong here?

If needed, I can provide more details. Here is an example:

# /tmp/dpdk-testpmd -c 0xf -n 4 -a 00:05.0 -- -i --rxq=4 --txq=4
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_iavf (8086:154c) device: 0000:00:05.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[0]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[1]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[2]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[3]

Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: link state change event
Port 0: 26:03:E7:02:8C:AB
Checking link statuses...
Done
testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 26:03:E7:02:8C:AB
Device name: 00:05.0
Driver name: net_iavf
Firmware-version: not available
Devargs:
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 25 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 64
Supported RSS offload flow types:
  ipv4  ipv4-frag  ipv4-tcp  ipv4-udp  ipv4-sctp  ipv4-other
  ipv6  ipv6-frag  ipv6-tcp  ipv6-udp  ipv6-sctp  ipv6-other
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 4
Max possible RX queues: 256
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 4
Max possible TX queues: 256
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 0
Max segment number per MTU/TSO: 0
Device capabilities: 0x0( )

testpmd> flow create 0 ingress pattern ipv4 dst is 192.168.0.5 / end actions queue index 1 / end
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
testpmd>

Attached is rte_config.h.

H.
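
For reference, the failing testpmd rule above corresponds roughly to the
following rte_flow C API calls. This is a minimal sketch rather than the code
actually used for the tests; the helper name create_ipv4_dst_to_queue and the
port_id handling are illustrative only.

#include <stdio.h>

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_ip.h>

/* Hypothetical helper: create the same rule as the failing testpmd command
 * "flow create 0 ingress pattern ipv4 dst is 192.168.0.5 / end
 *  actions queue index 1 / end" on the given port. */
static struct rte_flow *
create_ipv4_dst_to_queue(uint16_t port_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };

        /* Match IPv4 packets whose destination address is 192.168.0.5. */
        struct rte_flow_item_ipv4 ipv4_spec = {
                .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 5)),
        };
        struct rte_flow_item_ipv4 ipv4_mask = {
                .hdr.dst_addr = rte_cpu_to_be_32(0xffffffffu),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .spec = &ipv4_spec, .mask = &ipv4_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* Steer matching packets to RX queue 1. */
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        struct rte_flow_error error = { 0 };
        struct rte_flow *flow;

        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
        if (flow == NULL)
                /* On the iavf/ixgbevf VFs described above this should report
                 * the same message that testpmd prints, e.g.
                 * "Failed to create parser engine.". */
                printf("rte_flow_create() failed: %s\n",
                       error.message ? error.message : "(no message)");
        return flow;
}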