DPDK usage discussions
From: fwefew 4t4tg <7532yahoo@gmail.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: users@dpdk.org
Subject: Re: DPDK and DMA
Date: Wed, 11 Jan 2023 13:05:07 -0500	[thread overview]
Message-ID: <CA+Tq66V+_cWx9jRR9mJ7Y=pkkoBngn+HfHVko4zBACgbKKYURw@mail.gmail.com> (raw)
In-Reply-To: <20230111142600.221cdc2e@sovereign>


Thank you for taking the time to provide a nice reply. The upshot here is
that DPDK already uses DMA in a smart way on transmit: the NIC DMA-reads the
packet data that the PMD hands it via TX descriptors. I presume the reverse
also happens: the NIC uses DMA to move received packets out of its HW RXQs
into the host machine's memory, using buffers from the mempool associated
with the RX queue.
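
To make that RX presumption concrete, here is a minimal sketch, assuming
port 0 is already configured and started and that the mempool was supplied
to rte_eth_rx_queue_setup() at init time (port/queue numbers and the
processing loop are illustrative only, not from the original discussion):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* The NIC has already DMA-written packet data into buffers it took
     * from the RX queue's mempool; rte_eth_rx_burst() just hands the
     * filled mbufs to the application. */
    struct rte_mbuf *bufs[BURST_SIZE];
    uint16_t nb_rx = rte_eth_rx_burst(0 /* port */, 0 /* queue */,
                                      bufs, BURST_SIZE);

    for (uint16_t i = 0; i < nb_rx; i++) {
        /* ... inspect/process bufs[i] ... */
        rte_pktmbuf_free(bufs[i]);  /* buffer returns to the mempool */
    }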



On Wed, Jan 11, 2023 at 6:26 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:

> 2023-01-08 16:05 (UTC-0500), fwefew 4t4tg:
> > Consider a valid DPDK TXQ with its mempool of rte_mbufs. Application code
> > will allocate a mbuf from the pool and prepare it with headers, data, and
> > so on.
> >
> > When the mbuf(s) are enqueued to the NIC with rte_eth_tx_burst(), does DPDK
> > DMA the memory into the NIC? Is this an optimization worth considering?
>
> DPDK is SW running on the CPU.
> DMA is a way for HW to access RAM bypassing the CPU (thus it is "direct").
>
> What happens in rte_eth_tx_burst():
> DPDK fills the packet descriptor and requests the NIC to send the packet.
> The NIC subsequently and asynchronously uses DMA to read the packet data.
>
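
(For reference, a minimal sketch of that sequence from the application's
side, assuming port 0, queue 0, and a pktmbuf pool named mbuf_pool; the
names are illustrative, not part of the original mail:)

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
    /* ... build headers/payload with rte_pktmbuf_append() and
     * rte_pktmbuf_mtod() ... */

    /* The PMD fills a TX descriptor pointing at the mbuf's data buffer;
     * the NIC later DMA-reads that buffer on its own schedule. */
    uint16_t sent = rte_eth_tx_burst(0 /* port */, 0 /* queue */, &m, 1);
    if (sent == 0)
        rte_pktmbuf_free(m);  /* queue full: the application still owns m */
    /* If sent == 1, the PMD frees m only after the NIC has read it. */
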
> Regarding optimizations:
> 1. Even if the NIC has some internal buffer where it stores packet data
> before sending it to the wire, those buffers are not usually exposed.
> 2. If the NIC has on-board memory to store packet data,
> this would be implemented by a mempool driver working with such memory.
>
> > DPDK provides a DMA example here:
> > http://doc.dpdk.org/api/examples_2dma_2dmafwd_8c-example.html
> >
> > Now, to be fair, ultimately whether or not DMA helps must be evidenced by a
> > benchmark. Still, is there any serious reason to make mempools and their
> > mbufs DMA into and out of the NIC?
>
> DMA devices in DPDK allow the CPU to initiate an operation on RAM
> that will be performed asynchronously by some special HW.
> For example, instead of calling memset(), DPDK can tell a DMA device
> to zero a memory block and avoid spending CPU cycles on it
> (but the CPU will need to check later that the zeroing has completed).
>
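
As an aside, here is a minimal sketch of such an asynchronous fill with the
dmadev API, assuming dev_id/vchan refer to a DMA device that has already
been configured and started via rte_dma_configure(), rte_dma_vchan_setup(),
and rte_dma_start() (illustrative only, not from the original mail):

    #include <stdbool.h>
    #include <rte_dmadev.h>
    #include <rte_malloc.h>
    #include <rte_debug.h>

    int16_t dev_id = 0;   /* assumed: already configured and started */
    uint16_t vchan = 0;

    void *buf = rte_malloc(NULL, 4096, 0);       /* block to be zeroed */
    rte_iova_t dst = rte_malloc_virt2iova(buf);

    /* Enqueue a fill-with-zero job and kick the HW; the CPU is free to
     * do other work while the DMA engine clears the block. */
    int idx = rte_dma_fill(dev_id, vchan, 0 /* pattern */, dst, 4096,
                           RTE_DMA_OP_FLAG_SUBMIT);
    if (idx < 0)
        rte_panic("DMA fill enqueue failed\n");

    /* Before reusing the memory, make sure the operation has completed. */
    uint16_t last_idx;
    bool has_error;
    while (rte_dma_completed(dev_id, vchan, 1, &last_idx, &has_error) == 0)
        ;  /* poll; a real application would do useful work here instead */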


Thread overview: 5+ messages
2023-01-08 21:05 fwefew 4t4tg
2023-01-11 11:26 ` Dmitry Kozlyuk
2023-01-11 18:05   ` fwefew 4t4tg [this message]
2023-01-11 18:54     ` Stephen Hemminger
2023-01-11 20:14     ` Dmitry Kozlyuk
