DPDK patches and discussions
* [dpdk-dev] [Bug 798] mlx5 hw flow performance problem
@ 2021-08-30  7:40 bugzilla
  2021-10-09 17:45 ` bugzilla
  2021-10-14  6:54 ` bugzilla
  0 siblings, 2 replies; 3+ messages in thread
From: bugzilla @ 2021-08-30  7:40 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=798

            Bug ID: 798
           Summary: mlx5 hw flow performance problem
           Product: DPDK
           Version: 21.08
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: critical
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: kangzy1982@qq.com
  Target Milestone: ---

DPDK: 18.11/19.11/20.11/21.02/21.05/21.08
NIC: Mellanox Technologies MT27800 Family [ConnectX-5]
FW: firmware-version: 16.31.1014 (MT_0000000012)
CPU: Intel(R) Xeon(R) Platinum 8170M CPU @ 2.10GHz
KERNEL: 5.4.17-2102.200.13.uek
PKTGEN: IxNetwork 9.00.1915.16
MLNX OFED: MLNX_OFED_LINUX-5.0-2.1.8.0

testpmd (two runs; the only difference is dv_flow_en=0 vs dv_flow_en=1):

testpmd -l 26-51 --socket-mem=4096,4096 \
    -w d8:00.0,dv_flow_en=0,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1 \
    -- -i --rxq=16 --txq=16 --nb-cores=16 --forward-mode=icmpecho \
    --numa --enable-rx-cksum -a --rxd=2048 --txd=2048 --burst=64


testpmd -l 26-51 --socket-mem=4096,4096 \
    -w d8:00.0,dv_flow_en=1,mprq_en=1,rxqs_min_mprq=1,rx_vec_en=1 \
    -- -i --rxq=16 --txq=16 --nb-cores=16 --forward-mode=icmpecho \
    --numa --enable-rx-cksum -a --rxd=2048 --txd=2048 --burst=64


flow rules:
testpmd> flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / tcp / end actions queue index 15 / end
Flow rule #0 created
testpmd> flow create 0 ingress pattern eth / ipv4 dst is 1.1.1.1 / udp / end actions queue index 15 / end
Flow rule #1 created
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
Flow rule #2 created
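
The installed rules and the counter attached to rule #2 can be verified
from the same console (a sketch of the check; output was not captured in
this report):

testpmd> flow list 0
testpmd> flow query 0 2 count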


With these flows installed, no packets matched them, yet testpmd received only 60.1 Mpps:
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 6512958506 RX-missed: 4476258    RX-bytes:  390777510360
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     60163078          Rx-bps:  28878277584
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> 

After flushing all the flows, testpmd received ~148 Mpps, close to the 148.8 Mpps 64-byte line rate of a 100GbE link:
testpmd> flow flush 0
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 10076834471 RX-missed: 4482703    RX-bytes:  604610068260
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:    148061620          Rx-bps:  71069577904
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd>


* [dpdk-dev] [Bug 798] mlx5 hw flow performance problem
  2021-08-30  7:40 [dpdk-dev] [Bug 798] mlx5 hw flow performance problem bugzilla
@ 2021-10-09 17:45 ` bugzilla
  2021-10-14  6:54 ` bugzilla
  1 sibling, 0 replies; 3+ messages in thread
From: bugzilla @ 2021-10-09 17:45 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=798

Asaf Penso (asafp@nvidia.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|IN_PROGRESS                 |RESOLVED
         Resolution|---                         |WORKSFORME

--- Comment #3 from Asaf Penso (asafp@nvidia.com) ---
After checking the configuration, the issue is gone.


* [dpdk-dev] [Bug 798] mlx5 hw flow performance problem
  2021-08-30  7:40 [dpdk-dev] [Bug 798] mlx5 hw flow performance problem bugzilla
  2021-10-09 17:45 ` bugzilla
@ 2021-10-14  6:54 ` bugzilla
  1 sibling, 0 replies; 3+ messages in thread
From: bugzilla @ 2021-10-14  6:54 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=798

Asaf Penso (asafp@nvidia.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |CONFIRMED
         Resolution|WORKSFORME                  |---

--- Comment #5 from Asaf Penso (asafp@nvidia.com) ---
I misunderstood and thought it was resolved for you.
Can you remove flows #0 and #1? Let's leave only the RSS flow (#2) and see.
Also, can you provide the xstats output? It will show the distribution of
packets among the queues.
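
A sketch of the requested steps, assuming the rule IDs from the original
report (#0 and #1 are the queue rules; #2 carries the count/RSS actions):

testpmd> flow destroy 0 rule 0 rule 1
testpmd> show port xstats 0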
