From: Stephen Hemminger <stephen@networkplumber.org>
To: Dylan Baros <dcbaros@utexas.edu>
Cc: users@dpdk.org
Subject: Re: How do you calculate DPDK mempool requirements?
Date: Mon, 5 Dec 2022 10:28:51 -0800
Message-ID: <20221205102851.73254954@hermes.local>
In-Reply-To: <CAMy0=Dft22A5AaLETyf9X6_KZDirkqCxBgmQLXp41uDBDn3r4w@mail.gmail.com>

On Mon, 5 Dec 2022 11:32:40 -0600
Dylan Baros <dcbaros@utexas.edu> wrote:

> How do you figure out what settings need to be used to correctly configure
> a DPDK mempool for your application?
> 
> Specifically using rte_pktmbuf_pool_create():
> 
> - n, the number of elements in the mbuf pool
> - cache_size
> - priv_size
> - data_room_size
> 
> EAL arguments:
> - -n, number of memory channels
> - -r, number of memory ranks
> - -m, amount of memory to preallocate at startup
> - --in-memory, no shared data structures
> - --iova-mode
> - --huge-worker-stack
> 
> My setup:
> 
>    - 2 x Intel Xeon Gold 6348 CPU @ 2.6 GHz
>       - 28 cores per socket
>       - Max 3.5 GHz
>       - Hyperthreading disabled
>       - Ubuntu 22.04.1 LTS
>       - Kernel 5.15.0-53-generic
>       - Cores set to performance governor
>       - 4 x Sabrent 2TB Rocket 4 Plus in RAID 0 config
>       - 128 GB DDR4 memory
>       - 10 x 1 GB hugepages (can change to what is required)
>    - 1 x Mellanox ConnectX-5 100 GbE NIC
>       - 31:00.0 Ethernet controller: Mellanox Technologies MT27800 Family
>       [ConnectX-5]
>       - Firmware version: 16.35.1012
>    - UDP source:
>       - 100 GbE NIC
>       - 9000-byte MTU packets
>       - IPv4/UDP packets
> 
> I will be receiving UDP packets at 10 GB/s over a 100 GbE link. Right now
> I am trying to get it working at 2 GB/s to a single queue.
> 
> I reviewed the DPDK Programmer's Guide:
> https://doc.dpdk.org/guides/prog_guide/mempool_lib.html and also searched
> online, but the resources seem limited. I would appreciate any help or a
> push in the right direction.
> 
> 
> Sincerely,
> 
> DB

Compute the maximum number of mbufs that can be live in the system at
once. This depends on the NIC and the configured queue depths: every
receive descriptor posted to the hardware holds an mbuf, and transmit
descriptors keep theirs until completion. Add some additional headroom
for packets in flight and for the per-lcore mempool caches.
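
A minimal sketch of that arithmetic. All of the sizes below are
assumptions for illustration; substitute the values you actually pass
to rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() and your real
lcore count:

#include <rte_mempool.h>

/* Assumed configuration -- adjust to your port setup. */
#define NB_RX_QUEUES 1      /* single queue for now */
#define NB_TX_QUEUES 1
#define RX_RING_SIZE 4096   /* descriptors posted to the NIC */
#define TX_RING_SIZE 1024
#define NB_LCORES    2
#define BURST_SIZE   32     /* rte_eth_rx_burst() batch size */
#define CACHE_SIZE   512    /* must be <= RTE_MEMPOOL_CACHE_MAX_SIZE */
#define APP_EXTRA    8192   /* mbufs the application itself may hold
                             * (rings, reassembly, deferred frees, ...) */

static unsigned int
mbuf_pool_size(void)
{
	/* Each RX descriptor owns an mbuf while posted; TX descriptors
	 * hold mbufs until completion; each lcore can keep up to
	 * CACHE_SIZE mbufs in its per-core cache plus one burst. */
	return NB_RX_QUEUES * RX_RING_SIZE +
	       NB_TX_QUEUES * TX_RING_SIZE +
	       NB_LCORES * (CACHE_SIZE + BURST_SIZE) +
	       APP_EXTRA;
}

The mempool documentation also notes that memory usage is optimal when
n is a power of two minus one, and that picking n as a multiple of
cache_size avoids stranding elements in the per-core caches.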

Size of each mbuf depends on the MTU (plus RTE_PKTMBUF_HEADROOM), the
amount of reserved private area, etc.
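
Putting it together for your 9000-byte-MTU case, a hypothetical pool
creation, reusing mbuf_pool_size() and CACHE_SIZE from the sketch above
and assuming scatter RX is off so a whole frame must fit in one mbuf:

#include <rte_lcore.h>
#include <rte_mbuf.h>

/* 9216 is an assumption: a 9000-byte-MTU frame plus L2 overhead,
 * rounded up; RTE_PKTMBUF_HEADROOM (128 by default) comes on top. */
#define DATA_ROOM_SIZE (RTE_PKTMBUF_HEADROOM + 9216)

static struct rte_mempool *
create_rx_pool(void)
{
	return rte_pktmbuf_pool_create(
		"rx_pool",         /* unique pool name */
		mbuf_pool_size(),  /* n, from the previous snippet */
		CACHE_SIZE,        /* per-lcore object cache */
		0,                 /* priv_size: 0 unless you store
		                    * app metadata in each mbuf */
		DATA_ROOM_SIZE,
		rte_socket_id());  /* NUMA node; use the NIC's node */
}

If this returns NULL, rte_strerror(rte_errno) says why; note that a
pool of this size at roughly 9.5 KB per mbuf is on the order of 140 MB,
so your 10 x 1 GB hugepages leave plenty of room.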


Thread overview: 4+ messages
2022-12-05 17:32 Dylan Baros
2022-12-05 18:28 ` Stephen Hemminger [this message]
2022-12-06 12:16 ` IraM
2022-12-06 16:24   ` Stephen Hemminger
