DPDK usage discussions
From: Дмитрий Степанов <stepanov.dmit@gmail.com>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: Flow rules performance with ConnectX-6 Dx
Date: Fri, 11 Mar 2022 15:58:22 +0300
Message-ID: <CA+-SuJ13sdCEAsDTm6Q24B5ZrN88r+2Sb2dtW2beA9s6aFWpZQ@mail.gmail.com> (raw)
In-Reply-To: <BL1PR12MB5945A3B07DD85B9AB157D91BB90C9@BL1PR12MB5945.namprd12.prod.outlook.com>


Hey, Dmitry!
Thanks for the reply!

I'm using a global RSS configuration (set up via rte_eth_dev_configure)
which distributes incoming packets across different queues, and each queue
is handled by a different lcore.
I've checked that incoming traffic is properly distributed among them: for
example, with 16 queues (lcores) I see about 900 Kpps per lcore, which adds
up to about 15 Mpps in total.
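For reference, the port setup looks roughly like this (a simplified sketch;
port id 0, the queue count, and the RTE_ETH_RSS_IP hash field set are
illustrative, not the exact production values):

```c
/* Simplified sketch of the global RSS setup described above
 * (illustrative values: 16 queues, IP-only hashing). */
#include <rte_ethdev.h>

static int setup_rss(uint16_t port_id, uint16_t nb_queues)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,            /* use the PMD default key */
                .rss_hf = RTE_ETH_RSS_IP,   /* hash on IP addresses */
            },
        },
    };

    /* one RX and one TX queue per lcore */
    return rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
}
```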

I was able to reproduce the same behavior with the testpmd utility.

My steps:

- Start generator at 50 Mpps with 2 IP dest addresses: 10.0.0.1 and 10.0.0.2

- Start testpmd in interactive mode with 16 queues/lcores:

numactl -N 1 -m 1 ./dpdk-testpmd -l 64-127 -a 0000:c1:00.0  --
--nb-cores=16 --rxq=16 --txq=16 -i

- Create flow rule:

testpmd> flow create 0 group 0 priority 0 ingress pattern eth / ipv4 dst is
10.0.0.2 / end actions drop / end
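(For completeness, the equivalent rule through the rte_flow C API would look
roughly like the sketch below; port 0 is assumed, and group/priority default
to 0 as in the testpmd command.)

```c
/* Sketch of the same rule via rte_flow: drop ingress IPv4 packets
 * whose destination address is 10.0.0.2. */
#include <rte_flow.h>
#include <rte_ip.h>

static struct rte_flow *create_drop_rule(uint16_t port_id,
                                         struct rte_flow_error *error)
{
    const struct rte_flow_attr attr = { .ingress = 1 };
    const struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 2)),
    };
    const struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = RTE_BE32(UINT32_MAX), /* exact match on dst */
    };
    const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}
```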

- Start forwarding:

testpmd> start

- Show stats (it shows the same 15 Mpps instead of the expected 25 Mpps):

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1127219612 RX-missed: 0          RX-bytes:  67633178722
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1127219393 TX-errors: 0          TX-bytes:  67633171416

  Throughput (since last show)
  Rx-pps:     14759286          Rx-bps:   7084457512
  Tx-pps:     14758730          Tx-bps:   7084315448

############################################################################

- Ensure incoming traffic is properly distributed among queues (lcores):

testpmd> show port xstats 0

rx_q0_packets: 21841125
rx_q1_packets: 21847375
rx_q2_packets: 21833731
rx_q3_packets: 21837461
rx_q4_packets: 21842922
rx_q5_packets: 21843999
rx_q6_packets: 21838775
rx_q7_packets: 21833429
rx_q8_packets: 21838033
rx_q9_packets: 21835210
rx_q10_packets: 21833261
rx_q11_packets: 21833059
rx_q12_packets: 21849831
rx_q13_packets: 21843589
rx_q14_packets: 21842721
rx_q15_packets: 21834222

- If I use an IP destination address that doesn't match the drop rule
(replacing 10.0.0.2 with 10.0.0.3), I get the expected 50 Mpps:

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1988576249 RX-missed: 0          RX-bytes:  119314577228
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1988576248 TX-errors: 0          TX-bytes:  119314576882

  Throughput (since last show)
  Rx-pps:     49999534          Rx-bps:  23999776424
  Tx-pps:     49999580          Tx-bps:  23999776424

############################################################################

On Fri, 11 Mar 2022 at 14:37, Dmitry Kozlyuk <dkozlyuk@nvidia.com> wrote:

> Hi Dmitry,
>
> Can it be that RSS, to which non-matching traffic gets by default,
> is configured in a way that steers each destination IP to one queue?
> And this 15 Mpps is in fact how much a core can read from a queue?
> In general, it is always worth trying to reproduce the issue with testpmd
> and to describe flow rules in full testpmd format ("flow create...").
>


Thread overview: 5+ messages
2022-03-11 11:10 Дмитрий Степанов
2022-03-11 11:37 ` Dmitry Kozlyuk
2022-03-11 12:58   ` Дмитрий Степанов [this message]
2022-03-18 14:54     ` Dmitry Kozlyuk
2022-03-21  8:40       ` Дмитрий Степанов
