From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1057] Unable to use flow rules on VF
Date: Fri, 22 Jul 2022 14:57:58 +0000
Message-ID: <bug-1057-3@http.bugs.dpdk.org/>

https://bugs.dpdk.org/show_bug.cgi?id=1057

            Bug ID: 1057
           Summary: Unable to use flow rules on VF
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: hrvoje.habjanic@zg.ht.hr
  Target Milestone: ---

Created attachment 214
  --> https://bugs.dpdk.org/attachment.cgi?id=214&action=edit
rte_config file

Hi.

I'm trying to use the rte_flow API to steer packets to different queues, but
so far without success.
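
For reference, what I am trying to do, expressed directly through the rte_flow
API, is roughly the following (a minimal sketch; port_id stands for the VF
port, and the rule mirrors the testpmd command shown further below):

#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_ip.h>

/* Steer IPv4 packets with destination 192.168.0.5 to RX queue 1. */
static struct rte_flow *
steer_to_queue(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 0, 5)),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error error = { 0 };
    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}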

An important note: I'm working with a VF (SR-IOV) inside a VM. The VM runs
Ubuntu 20.04.

The DPDK version used is 21.11.1 (statically compiled). Tests are done with the
testpmd application.

I tested the same rule with the following DPDK drivers:

ixgbevf - NOT WORKING (x520)
mlx5    - WORKING
iavf    - NOT WORKING (xxv710, xl710, e810)

Cards used are:

xx:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+
Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for
25GbE SFP28 (rev 02)
        Subsystem: Hewlett Packard Enterprise Ethernet 10/25/Gb 2-port 661SFP28
Adapter

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for
40GbE QSFP+ (rev 02)
        Subsystem: Intel Corporation Ethernet Converged Network Adapter
XL710-Q2

xx:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for
QSFP (rev 02)
        Subsystem: Intel Corporation Ethernet Network Adapter E810-C-Q2

The cards behave the same in different servers; I tried them on Sandy Bridge,
Skylake, and Cascade Lake architectures.

Drivers used (and versions):

ixgbe-5.12.5
ixgbevf-4.12.4
i40e-2.18.9
ice-1.7.16
iavf-4.2.7

Driver versions are the same in the host and the guest (VM).

The main question here is: do these cards support the rte_flow API applied to a
VF port? If they do, what is wrong here? If needed, I can provide more details.
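
As an aside: my understanding is that rte_flow_validate() can be used to ask
the PMD whether it would accept a rule without actually installing it. A
minimal sketch, with attr/pattern/actions built as in the sketch near the top
of this report:

#include <stdio.h>
#include <rte_flow.h>

/* Ask the PMD whether it would accept the rule, without creating it. */
static int
probe_rule(uint16_t port_id,
           const struct rte_flow_attr *attr,
           const struct rte_flow_item pattern[],
           const struct rte_flow_action actions[])
{
    struct rte_flow_error error = { 0 };
    int ret = rte_flow_validate(port_id, attr, pattern, actions, &error);

    if (ret != 0)
        printf("rule rejected (%d): %s\n", ret,
               error.message ? error.message : "no details");
    return ret;
}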

Here is an example:

# /tmp/dpdk-testpmd -c 0xf -n 4 -a 00:05.0 -- -i --rxq=4 --txq=4
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_iavf (8086:154c) device: 0000:00:05.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1]
in Queue[0]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1]
in Queue[1]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1]
in Queue[2]
iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1]
in Queue[3]

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event

Port 0: link state change event
Port 0: 26:03:E7:02:8C:AB
Checking link statuses...
Done
testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 26:03:E7:02:8C:AB
Device name: 00:05.0
Driver name: net_iavf
Firmware-version: not available
Devargs: 
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 25 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 52
Redirection table size: 64
Supported RSS offload flow types:
  ipv4
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-sctp
  ipv4-other
  ipv6
  ipv6-frag
  ipv6-tcp
  ipv6-udp
  ipv6-sctp
  ipv6-other
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 4
Max possible RX queues: 256
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Current number of TX queues: 4
Max possible TX queues: 256
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Max segment number per packet: 0
Max segment number per MTU/TSO: 0
Device capabilities: 0x0( )
testpmd> flow create 0 ingress pattern ipv4 dst is 192.168.0.5 / end actions queue index 1 / end
iavf_flow_create(): Failed to create flow
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)): Failed to create parser engine.: Invalid argument
testpmd> 
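
For completeness, two variations I plan to try next (just the plan, not
results yet): validating the rule instead of creating it, and prefixing the
pattern with an explicit eth item, in case the iavf parser expects the pattern
to start at L2 (that is my assumption, not something I have confirmed):

testpmd> flow validate 0 ingress pattern eth / ipv4 dst is 192.168.0.5 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth / ipv4 dst is 192.168.0.5 / end actions queue index 1 / end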

Attached is rte_config.h.

H.
