DPDK usage discussions
From: Yan Lei <l.yan@epfl.ch>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] [mlx5 + DPDK 19.11] Flow insertion rate less than 4K per sec
Date: Fri, 10 Apr 2020 18:11:15 +0000
Message-ID: <2cb8c79c6e0a4829996f7a3b56386e89@epfl.ch>

Hi,


I am doing a study that requires inserting more than one million flow rules per second into the NIC. I am running DPDK 19.11 on a ConnectX-5 NIC.

But I have only managed to create around 3.3K rules per second. Below is the code I used to measure the insertion rate:


  uint16_t mask = UINT16_MAX;
  uint64_t timer_start = rte_get_tsc_cycles();

  for (int udp = 0; udp < num_rules; udp++) {
    /*
     * generate_dst_udp_flow() is a simple wrapper around
     * rte_flow_validate() + rte_flow_create(); removing validation
     * seems to have little impact on performance. Each rule assigns
     * UDP packets with a specific dst port value to an RX queue.
     * Args: NIC port, priority (doesn't affect the insertion rate in
     * my observation), dst UDP port spec, dst UDP port mask, RX queue
     * index.
     */
    generate_dst_udp_flow(0, 1, udp % UINT16_MAX, mask, udp % 12);
  }

  uint64_t timer_val = rte_get_tsc_cycles() - timer_start;
  printf("[BENCH] Create %d udp flow takes %" PRIu64 " us\n", num_rules,
         timer_val * 1000000 / rte_get_tsc_hz());
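
For completeness, the wrapper does roughly the following (a minimal sketch, not the exact code; the outer ETH/IPV4 pattern items and the function signature are reconstructed for illustration):

  #include <rte_byteorder.h>
  #include <rte_flow.h>

  static struct rte_flow *
  generate_dst_udp_flow(uint16_t port_id, uint32_t priority,
                        uint16_t dst_port, uint16_t dst_mask,
                        uint16_t queue_idx)
  {
    struct rte_flow_attr attr = {
      .priority = priority,
      .ingress = 1,
    };
    /* Match UDP packets on the given destination port. */
    struct rte_flow_item_udp udp_spec = {
      .hdr.dst_port = rte_cpu_to_be_16(dst_port),
    };
    struct rte_flow_item_udp udp_mask = {
      .hdr.dst_port = rte_cpu_to_be_16(dst_mask),
    };
    struct rte_flow_item pattern[] = {
      { .type = RTE_FLOW_ITEM_TYPE_ETH },
      { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
      { .type = RTE_FLOW_ITEM_TYPE_UDP,
        .spec = &udp_spec, .mask = &udp_mask },
      { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    /* Steer matching packets to the given RX queue. */
    struct rte_flow_action_queue queue = { .index = queue_idx };
    struct rte_flow_action actions[] = {
      { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
      { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error))
      return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, &error);
  }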



With 60000 rules I got: [BENCH] Create 60000 udp flow takes 17821419 us. That works out to roughly 297 us per insertion (about 3.3K rules per second), which is far too slow.


According to the mlx5 PMD manual (http://doc.dpdk.org/guides-19.11/nics/mlx5.html), the insertion rate should be much higher:

"Flow insertion rate of more then million flows per second, when using Direct Rules."


This capability has been available since DPDK 19.05 (see the release notes at http://doc.dpdk.org/guides-19.05/rel_notes/release_19_05.html#new-features and the patch series at https://mails.dpdk.org/archives/dev/2019-February/125303.html).
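
One thing I am unsure about: the quoted number applies "when using Direct Rules", and in 19.11 the mlx5 DV flow engine seems to be opt-in via the dv_flow_en devarg, so my benchmark may be going through a slower path. A minimal sketch of how I would enable it at EAL init (the PCI address below is a placeholder for my NIC):

  #include <rte_eal.h>

  int main(int argc, char **argv)
  {
    (void)argc;
    /*
     * Sketch only: pass the mlx5 dv_flow_en devarg when whitelisting
     * the NIC; 0000:03:00.0 is a placeholder PCI address.
     */
    char *eal_argv[] = {
      argv[0],
      "-w", "0000:03:00.0,dv_flow_en=1",
    };

    if (rte_eal_init(3, eal_argv) < 0)
      return -1;

    /* ... port/queue setup and the flow-insertion benchmark ... */
    return 0;
  }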


Did I miss anything? How can I get the promised one million flows per second?

My setup is as follows:

- CPU: E5-2697 v3 (14 cores, SMT disabled, CPU frequency fixed @ 2.6 GHz)
- NIC: Mellanox MCX515A-CCAT (installed on PCIe Gen3 x16)
- DPDK: 19.11
- OFED: 4.7-3.2.9.0 with upstream libs (I also tried standalone rdma-core v28.0 instead of the one bundled with OFED, with similar results)
- Kernel: 4.15
- OS: Ubuntu 18.04
- Firmware: 16.26.1040

The firmware, driver, and DPDK are tuned in the same way as in the DPDK 19.11 Mellanox NIC performance report (http://fast.dpdk.org/doc/perf/DPDK_19_11_Mellanox_NIC_performance_report.pdf).

Your feedback will be much appreciated.

Thanks,
Lei

Thread overview: 12+ messages
2020-04-10 18:11 Yan Lei [this message]
2020-04-14 10:12 ` Thomas Monjalon
2020-04-14 11:20   ` Yan Lei
2020-04-16 15:32     ` Yan Lei
2020-04-19 13:57       ` Thomas Monjalon
2020-04-19 14:07         ` Wisam Monther
2020-04-20 12:24           ` Tom Barbette
2020-04-20 13:48             ` Yan Lei
2020-04-21  8:59               ` Tom Barbette
2020-04-21 12:30                 ` Raslan Darawsheh
2020-04-24 10:12                   ` Tom Barbette
2020-04-24 12:40                     ` Yan Lei
