From: Sruthi Yellamraju <ysruthi@gmail.com>
To: users@dpdk.org
Subject: [dpdk-users] test_pmd :: flow filtering not working as expected on Intel NIC
Date: Thu, 6 Jun 2019 21:04:01 -0400 [thread overview]
Message-ID: <CAFxVZ5LiZxJFUo_RcQfDGP_dR7TUVLvU=BH+qNb9ONxaDGDxuA@mail.gmail.com> (raw)
Hello,
I am trying to test the rte_flow API using the testpmd application. The procedure is: start traffic forwarding, install an rte_flow rule that drops all traffic, then check the port stats to see whether the traffic is actually dropped. I have tried many different combinations, but the traffic is never dropped.
- Any thoughts on how I can get a drop rule to work using testpmd?
- Also, is there a list of Intel NICs that support rte_flow, with descriptions of which rte_flow features each of them supports?
My NIC is an Intel X710.
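(For reference, one check testpmd offers, as I understand it, is "flow validate", which asks the PMD whether it would accept a rule without actually installing it. I would expect my drop rule to be validated like this; these commands are my reading of the testpmd syntax, not output captured on the X710:)
testpmd> flow validate 0 ingress pattern eth / end actions drop / end
testpmd> flow validate 1 ingress pattern eth / end actions drop / end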
My steps:
*(1) Start testpmd:*
$ sudo ./build/app/testpmd -l 12,13,14 -n 4 -- -i
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL: probe driver: 8086:1572 net_i40e
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:08:00.1 on NUMA socket 0
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=587456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=587456, size=2176,
socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 3C:FD:FE:CE:EC:54
Configuring Port 1 (socket 1)
Port 1: 3C:FD:FE:CE:EE:08
Checking link statuses...
Done
testpmd> sta
Port 1: link state change event
Port 0: link state change event
*(2) Start forwarding traffic between two ports*
testpmd>
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
*(3) Check that traffic is flowing: OK*
testpmd> show port stats all
######################## NIC statistics for port 0
########################
RX-packets: 2565456 RX-missed: 5780962 RX-bytes: 4512982511
RX-errors: 2
RX-nombuf: 0
TX-packets: 2562340 TX-errors: 0 TX-bytes: 1383470795
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
######################## NIC statistics for port 1
########################
RX-packets: 2563321 RX-missed: 5470250 RX-bytes: 4360388448
RX-errors: 2
RX-nombuf: 0
TX-packets: 2565053 TX-errors: 0 TX-bytes: 1384794806
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0
########################
RX-packets: 3738196 RX-missed: 5781554 RX-bytes: 5226453840
RX-errors: 2
RX-nombuf: 0
TX-packets: 3733847 TX-errors: 0 TX-bytes: 2095944031
Throughput (since last show)
Rx-pps: 1146794
Tx-pps: 1145588
############################################################################
######################## NIC statistics for port 1
########################
RX-packets: 3734584 RX-missed: 5470836 RX-bytes: 5073167767
RX-errors: 2
RX-nombuf: 0
TX-packets: 3737663 TX-errors: 0 TX-bytes: 2097873710
Throughput (since last show)
Rx-pps: 1145356
Tx-pps: 1146673
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0
########################
RX-packets: 5065297 RX-missed: 5782844 RX-bytes: 5898773891
RX-errors: 2
RX-nombuf: 0
TX-packets: 5059419 TX-errors: 0 TX-bytes: 2766889330
Throughput (since last show)
Rx-pps: 1214593
Tx-pps: 1213193
############################################################################
######################## NIC statistics for port 1
########################
RX-packets: 5060403 RX-missed: 5472138 RX-bytes: 5744847501
RX-errors: 2
RX-nombuf: 0
TX-packets: 5064835 TX-errors: 0 TX-bytes: 2769527917
Throughput (since last show)
Rx-pps: 1213399
Tx-pps: 1214637
############################################################################
*(4) Set a filter on each port to drop all eth traffic*
testpmd> flow create 0 ingress pattern eth / end actions drop / end
Flow rule #0 created
testpmd> flow create 1 ingress pattern eth / end actions drop / end
Flow rule #0 created
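(For completeness, this is how I would confirm that the rules are really installed and, assuming the i40e PMD accepts a count action with this pattern, how I would read a drop counter directly from the rule instead of relying on port stats. The output below does not include these commands:)
testpmd> flow list 0
testpmd> flow list 1
testpmd> flow destroy 0 rule 0
testpmd> flow create 0 ingress pattern eth / end actions count / drop / end
testpmd> flow query 0 0 count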
*(5) Check whether traffic is dropped. The RX/TX counters and the pps rates keep increasing as before, so traffic is actually not being dropped and the filter does not seem to take effect.* (A note on resetting the counters to make this comparison cleaner follows the last stats output below.)
testpmd> show port stats all
######################## NIC statistics for port 0
########################
RX-packets: 22396701 RX-missed: 5807012 RX-bytes: 15827722523
RX-errors: 2
RX-nombuf: 0
TX-packets: 22345743 TX-errors: 0 TX-bytes: 12669400435
Throughput (since last show)
Rx-pps: 1149437
Tx-pps: 1146447
############################################################################
######################## NIC statistics for port 1
########################
RX-packets: 22369666 RX-missed: 5496215 RX-bytes: 15660935280
RX-errors: 2
RX-nombuf: 0
TX-packets: 22333043 TX-errors: 0 TX-bytes: 12682032499
Throughput (since last show)
Rx-pps: 1147969
Tx-pps: 1145246
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0
########################
RX-packets: 25054000 RX-missed: 5810727 RX-bytes: 16926854689
RX-errors: 2
RX-nombuf: 0
TX-packets: 24977043 TX-errors: 0 TX-bytes: 13764321537
Throughput (since last show)
Rx-pps: 1405948
Tx-pps: 1392192
############################################################################
######################## NIC statistics for port 1
########################
RX-packets: 25024585 RX-missed: 5499881 RX-bytes: 16758760036
RX-errors: 2
RX-nombuf: 0
TX-packets: 24966843 TX-errors: 0 TX-bytes: 13778185984
Throughput (since last show)
Rx-pps: 1404664
Tx-pps: 1393490
############################################################################
testpmd>
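(For what it is worth, as mentioned in step (5), the way I would make the before/after comparison cleaner is to zero the counters right after installing the rules and then sample the stats twice; if the drop rule were working, I would expect the TX counters and Tx-pps to stay near zero:)
testpmd> clear port stats all
testpmd> show port stats all
testpmd> show port stats all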