From: fwefew 4t4tg <7532yahoo@gmail.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: users@dpdk.org
Date: Wed, 11 Jan 2023 13:05:07 -0500
Subject: Re: DPDK and DMA

Thank you for taking the time to provide a nice reply. The upshot here
is that DPDK already uses DMA in a smart way to move packet data into
TXQs. I presume the reverse also happens: the NIC uses DMA to move
packets out of its HW RXQs into the host machine's memory, into mbufs
allocated from the mempool associated with the RXQ.
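In code terms, the RX half I have in mind looks roughly like the sketch
below. This is only illustrative: the pool sizes, queue IDs, and the
assumption that the port was already configured with
rte_eth_dev_configure() are mine, not from this thread.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static int rx_side_sketch(uint16_t port_id)
{
	struct rte_mempool *pool;
	struct rte_mbuf *bufs[32];
	uint16_t n;

	/* The mempool given to rte_eth_rx_queue_setup() is where the PMD
	 * allocates mbufs; the NIC DMAs received packets directly into
	 * those mbufs' data buffers. */
	pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       (int)rte_socket_id());
	if (pool == NULL)
		return -1;
	if (rte_eth_rx_queue_setup(port_id, 0, 1024,
				   rte_eth_dev_socket_id(port_id),
				   NULL, pool) != 0)
		return -1;
	if (rte_eth_dev_start(port_id) != 0)
		return -1;

	/* By the time an mbuf comes back from rte_eth_rx_burst(), the
	 * NIC-to-host DMA for that packet has already completed. */
	n = rte_eth_rx_burst(port_id, 0, bufs, 32);
	rte_pktmbuf_free_bulk(bufs, n);
	return 0;
}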
On Wed, Jan 11, 2023 at 6:26 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
> 2023-01-08 16:05 (UTC-0500), fwefew 4t4tg:
> > Consider a valid DPDK TXQ with its mempool of rte_mbufs. Application code
> > will allocate an mbuf from the pool and prepare it with headers, data, and
> > so on.
> >
> > When the mbuf(s) are enqueued to the NIC with rte_eth_tx_burst(), does DPDK
> > DMA the memory into the NIC? Is this an optimization worth considering?
>
> DPDK is SW running on the CPU.
> DMA is a way for HW to access RAM while bypassing the CPU (hence "direct").
>
> What happens in rte_eth_tx_burst():
> DPDK fills in the packet descriptor and requests that the NIC send the packet.
> The NIC subsequently and asynchronously uses DMA to read the packet data.
>
> Regarding optimizations:
> 1. Even if the NIC has some internal buffer where it stores packet data
> before sending it to the wire, such buffers are not usually exposed.
> 2. If the NIC has on-board memory to store packet data,
> that would be implemented by a mempool driver working with such memory.
>
> > DPDK provides a DMA example here:
> > http://doc.dpdk.org/api/examples_2dma_2dmafwd_8c-example.html
> >
> > Now, to be fair, whether or not DMA helps must ultimately be evidenced by a
> > benchmark. Still, is there any serious reason to make mempools and their
> > bufs DMA into and out of the NIC?
>
> DMA devices in DPDK allow the CPU to initiate an operation on RAM
> that will be performed asynchronously by some dedicated HW.
> For example, instead of calling memset(), DPDK can tell a DMA device
> to zero a memory block and avoid spending CPU cycles on it
> (though the CPU will later need to confirm that the zeroing completed).
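Translating your description of rte_eth_tx_burst() into code, the
application-side sequence is roughly the sketch below. Port/queue 0 and
the helper name are assumed for illustration, and error handling is
trimmed. The key point is that rte_eth_tx_burst() returning says
nothing about whether the NIC's DMA read has happened yet.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t tx_side_sketch(uint16_t port_id, struct rte_mempool *pool,
			       const void *payload, uint16_t len)
{
	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	char *data;
	uint16_t sent;

	if (m == NULL)
		return 0;
	data = rte_pktmbuf_append(m, len);
	if (data == NULL) {
		rte_pktmbuf_free(m);
		return 0;
	}
	memcpy(data, payload, len); /* CPU writes the frame into the mbuf */

	/* This returns once the TX descriptor is posted, not once the NIC
	 * has DMA-read the data or put it on the wire; the PMD frees the
	 * mbuf after the NIC is done with it. */
	sent = rte_eth_tx_burst(port_id, 0, &m, 1);
	if (sent == 0)
		rte_pktmbuf_free(m);
	return sent;
}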
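And for your memset() example, my understanding of the dmadev API is
roughly as follows. Here dev_id and vchan are assumed to have been set
up elsewhere with rte_dma_configure() and rte_dma_vchan_setup(), and
the busy-wait on completion is only to keep the sketch short.

#include <stdbool.h>
#include <rte_dmadev.h>
#include <rte_malloc.h>

static int dma_zero_sketch(int16_t dev_id, uint16_t vchan,
			   void *buf, uint32_t len)
{
	/* buf must be IOVA-addressable, e.g. allocated with rte_malloc() */
	rte_iova_t dst = rte_malloc_virt2iova(buf);
	uint16_t last;
	bool error = false;

	/* Enqueue a fill-with-zero op and ring the doorbell in one call */
	if (rte_dma_fill(dev_id, vchan, 0, dst, len,
			 RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* The CPU is free to do other work here; before using the buffer
	 * it must confirm that the operation has completed. */
	while (rte_dma_completed(dev_id, vchan, 1, &last, &error) == 0)
		; /* real code would do useful work instead of spinning */
	return error ? -1 : 0;
}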