* [dpdk-users] i40e + rte_flow: Drop filter not working
From: Sruthi Yellamraju @ 2019-06-17 2:19 UTC (permalink / raw)
To: users
Hello,
I am trying to test the rte_flow API using the testpmd application.
Basically, I start traffic forwarding, then set an rte_flow rule to drop
all Ethernet traffic, then check the stats to see whether traffic is dropped.
I have tried many different combinations, but traffic is not being dropped.
- Any thoughts on how I can get a drop rule to work using testpmd?
- Also, is there a list of Intel NICs that support rte_flow, with
descriptions of which rte_flow features they support?
My NIC is an Intel X710, and I am using DPDK 19.02.
My steps:
*(1) Start testpmd:*
$ sudo ./build/app/testpmd -l 12,13,14 -n 4 -- -i
EAL: Detected 56 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL: probe driver: 8086:1572 net_i40e
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:08:00.1 on NUMA socket 0
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=587456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=587456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 3C:FD:FE:CE:EC:54
Configuring Port 1 (socket 1)
Port 1: 3C:FD:FE:CE:EE:08
Checking link statuses...
Done
testpmd> sta
Port 1: link state change event
Port 0: link state change event
*(2) Start forwarding traffic between two ports*
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x10000
RX queue: 0
RX desc=256 - RX free threshold=32
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=32
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=32
*(3) Check that traffic is flowing: OK*
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 2565456 RX-missed: 5780962 RX-bytes: 4512982511
RX-errors: 2
RX-nombuf: 0
TX-packets: 2562340 TX-errors: 0 TX-bytes: 1383470795
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 2563321 RX-missed: 5470250 RX-bytes: 4360388448
RX-errors: 2
RX-nombuf: 0
TX-packets: 2565053 TX-errors: 0 TX-bytes: 1384794806
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 3738196 RX-missed: 5781554 RX-bytes: 5226453840
RX-errors: 2
RX-nombuf: 0
TX-packets: 3733847 TX-errors: 0 TX-bytes: 2095944031
Throughput (since last show)
Rx-pps: 1146794
Tx-pps: 1145588
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 3734584 RX-missed: 5470836 RX-bytes: 5073167767
RX-errors: 2
RX-nombuf: 0
TX-packets: 3737663 TX-errors: 0 TX-bytes: 2097873710
Throughput (since last show)
Rx-pps: 1145356
Tx-pps: 1146673
############################################################################
*(4) Set filters to drop all eth traffic from both ports*
testpmd> flow create 0 ingress pattern eth / end actions drop / end
Flow rule #0 created
testpmd> flow create 1 ingress pattern eth / end actions drop / end
Flow rule #0 created
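(For reference, my understanding is that this command corresponds roughly to
the rte_flow C calls below. This is a minimal, untested sketch -- port_id is
assumed to be an already-configured port -- and validating before creating
should make an unsupported rule fail loudly:)

#include <rte_flow.h>

/* Sketch: C-API equivalent of
 * "flow create 0 ingress pattern eth / end actions drop / end". */
static struct rte_flow *
create_eth_drop_rule(uint16_t port_id, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH }, /* any Ethernet frame */
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* A PMD that cannot support the rule should reject it here. */
        if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
                return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}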
*(5) Check whether traffic is dropped. Traffic is actually not being
dropped, so the filter does not seem to work.*
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 22396701 RX-missed: 5807012 RX-bytes: 15827722523
RX-errors: 2
RX-nombuf: 0
TX-packets: 22345743 TX-errors: 0 TX-bytes: 12669400435
Throughput (since last show)
Rx-pps: 1149437
Tx-pps: 1146447
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 22369666 RX-missed: 5496215 RX-bytes: 15660935280
RX-errors: 2
RX-nombuf: 0
TX-packets: 22333043 TX-errors: 0 TX-bytes: 12682032499
Throughput (since last show)
Rx-pps: 1147969
Tx-pps: 1145246
############################################################################
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 25054000 RX-missed: 5810727 RX-bytes: 16926854689
RX-errors: 2
RX-nombuf: 0
TX-packets: 24977043 TX-errors: 0 TX-bytes: 13764321537
Throughput (since last show)
Rx-pps: 1405948
Tx-pps: 1392192
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 25024585 RX-missed: 5499881 RX-bytes: 16758760036
RX-errors: 2
RX-nombuf: 0
TX-packets: 24966843 TX-errors: 0 TX-bytes: 13778185984
Throughput (since last show)
Rx-pps: 1404664
Tx-pps: 1393490
############################################################################
Thanks,
Sruthi
* Re: [dpdk-users] FW: i40e + rte_flow: Drop filter not working
From: Ye Xiaolong @ 2019-06-19 11:42 UTC (permalink / raw)
To: ysruthi, users
Cc: Zhang, Xiao, Xing, Beilei, Wang, Haiyue, Zhang, Qi Z, Zhang, Helin
>-----Original Message-----
>From: users [mailto:users-bounces@dpdk.org] On Behalf Of Sruthi Yellamraju
>Sent: Monday, June 17, 2019 3:19 AM
>To: users@dpdk.org
>Subject: [dpdk-users] i40e + rte_flow: Drop filter not working
>
>Hello,
>
>I am trying to test the rte_flow API using the testpmd application.
>Basically, I start traffic forwarding, then set an rte_flow rule to drop all Ethernet traffic, then check the stats to see whether traffic is dropped.
>I have tried many different combinations, but traffic is not being dropped.
>
>- Any thoughts on how I can get a drop rule to work using testpmd?
The rte_flow rule "flow create 0 ingress pattern eth / end actions drop / end"
you used in below test is not supported, it should return an explicit error
when you tried to create it, it's an software bug and we'll fix it.
As for how to drop an available rule in testpmd, you can refer to
https://doc.dpdk.org/guides/howto/rte_flow.html
The flow rule example in it is: flow create 0 ingress pattern eth / vlan / ipv4 dst is 192.168.3.2 / end actions drop / end
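If you want to drive that rule from C directly, below is a rough sketch
(illustrative and untested; note the explicit mask so that only the IPv4
destination address is matched):

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Sketch of "flow create 0 ingress pattern eth / vlan /
 * ipv4 dst is 192.168.3.2 / end actions drop / end". */
static struct rte_flow *
create_ipv4_dst_drop(uint16_t port_id, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 ip_spec = {
                .hdr.dst_addr = rte_cpu_to_be_32(0xC0A80302), /* 192.168.3.2 */
        };
        struct rte_flow_item_ipv4 ip_mask = {
                .hdr.dst_addr = rte_cpu_to_be_32(0xFFFFFFFF),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_VLAN },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .spec = &ip_spec, .mask = &ip_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}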
Also, the statistics you get from `show port stats 0` are obtained by reading
VSI registers, so they count every packet the hardware received, even packets
that are later filtered out by the rte_flow rule. Instead, you can `set verbose 1`
in testpmd and observe whether the rte_flow rule takes effect.
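Another option from C, assuming the PMD supports the COUNT action (please
verify this for i40e), is to create the rule with a COUNT action alongside
DROP and poll the counter -- a rough, untested sketch:

#include <rte_flow.h>

/* Sketch: query a COUNT action attached to the drop rule to confirm
 * packets are actually hitting it. The rule must have been created
 * with this COUNT action in its action list. */
static int
query_drop_hits(uint16_t port_id, struct rte_flow *flow, uint64_t *hits)
{
        struct rte_flow_action_count count_conf = { .id = 0 };
        struct rte_flow_action count_action = {
                .type = RTE_FLOW_ACTION_TYPE_COUNT,
                .conf = &count_conf,
        };
        struct rte_flow_query_count stats = { .reset = 0 };
        struct rte_flow_error err;

        if (rte_flow_query(port_id, flow, &count_action, &stats, &err) != 0)
                return -1;
        *hits = stats.hits;
        return 0;
}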
>- Also, is there a list of Intel NICs that support rte_flow, with descriptions of which rte_flow features they support?
>
Unfortunately, no such list is available currently, but we'll try to
improve our docs in the dpdk repo.
@beilei, @qi, correct me if I am wrong.
Thanks,
Xiaolong
> [...]
* Re: [dpdk-users] FW: i40e + rte_flow: Drop filter not working
From: Cliff Burdick @ 2019-06-19 16:27 UTC (permalink / raw)
To: Ye Xiaolong
Cc: ysruthi, users, Zhang, Xiao, Xing, Beilei, Wang, Haiyue, Zhang,
Qi Z, Zhang, Helin
Xiaolong, maybe it would be useful for every vendor to maintain a matrix of
every flow action/match that is and is not supported by each PMD? I know we've
had this problem with Mellanox as well. Likely only a handful of vendors
support any form of rte_flow right now, so starting such a matrix soon would
be nice. Also, I believe in Mellanox's case some rules are executed in
hardware and some are not, so it would also be nice to distinguish between
those. Someone from Mellanox can correct me if I'm wrong on that.
On Tue, Jun 18, 2019 at 10:00 PM Ye Xiaolong <xiaolong.ye@intel.com> wrote:
> [...]
> Unfortunately, no such list is available currently, but we'll try to
> improve our docs in the dpdk repo.
> [...]
* Re: [dpdk-users] FW: i40e + rte_flow: Drop filter not working
From: Sruthi Yellamraju @ 2019-06-21 10:46 UTC (permalink / raw)
To: Cliff Burdick
Cc: Ye Xiaolong, users, Zhang, Xiao, Xing, Beilei, Wang, Haiyue,
Zhang, Qi Z, Zhang, Helin
Thank you for your response, Xiaolong. It's very helpful and confirms our
findings from tests.
Thank you, Cliff, for your response. Having such a matrix would be *extremely
useful* to application developers, all the more so because rte_flow seems to
be an extensive, generic API while the underlying hardware capabilities can be
severely limited and can vary between NICs. We would very much appreciate such
info.
Also, how does RSS relate to the rte_flow API itself? RSS distributes packets
to queues based on hashes, while rte_flow can also distribute packets to
various queues based on other criteria. Does RSS have to be disabled for
rte_flow to work? Does RSS take precedence over rte_flow? Is that hardware
dependent as well?
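To make the question concrete, below is a rough, untested sketch of the kind
of per-flow queue steering I have in mind, using the QUEUE action; whether
this overrides RSS for matching packets is exactly the part that seems
hardware dependent:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Sketch: steer TCP dst-port-80 traffic to one RX queue; presumably
 * non-matching traffic would still be spread by RSS. */
static struct rte_flow *
steer_http_to_queue(uint16_t port_id, uint16_t queue_idx,
                    struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_tcp tcp_spec = {
                .hdr.dst_port = rte_cpu_to_be_16(80),
        };
        struct rte_flow_item_tcp tcp_mask = {
                .hdr.dst_port = rte_cpu_to_be_16(0xFFFF),
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_TCP,
                  .spec = &tcp_spec, .mask = &tcp_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = queue_idx };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}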
Thanks,
Sruthi
On Wed, Jun 19, 2019 at 12:28 PM Cliff Burdick <shaklee3@gmail.com> wrote:
> Xiaolong, maybe it would be useful for every vendor to maintain a matrix of
> every flow action/match that is and is not supported by each PMD?
> [...]