From: "Wieckowski, Jacob" <Jacob.Wieckowski@vector.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: RE: DMA Transfers to PCIe Bar Memory
Date: Thu, 7 Nov 2024 13:09:41 +0000 [thread overview]
Message-ID: <AS4PR01MB10408DB4E11FB95692AA50774FD5C2@AS4PR01MB10408.eurprd01.prod.exchangelabs.com> (raw)
In-Reply-To: <20241107124225.50c22ee2@sovereign>
Hi Jacob,
2024-11-07 11:50 (UTC+0000), Wieckowski, Jacob:
> Hi Dmitry,
>
> I am sorry, I was a bit too far ahead with my thoughts.
>
> We have started a DPDK evaluation project to build knowledge and understanding of the DPDK framework.
> We used an E1000 driver with a workaround to gain access to the PCIe 5.0 R-Tile on the Intel Agilex7 FPGA under Windows.
>
> Accesses to the PCIe BAR work in principle, but we can currently only carry out 64-bit accesses to the BAR memory.
> In the PCIe Config Space in the Capabilities Register, a maximum payload size of 512 bytes is configured.
> The Intel Core in the FPGA and the Root Complex also support TLPs of this length.
>
> We use the rte_read32 and rte_write32 functions to access the BAR memory, which obviously performs accesses of at most 2 DWs.
> We were able to verify this in the FPGA, because only TLPs with length 2 arrived on the RX interface.
>
> How can a block transfer be initiated in DPDK so that TLPs with a length of 128 DWs are generated on the PCIe bus?
Thanks, now I understand what you need.
Unfortunately, DPDK has no relevant API; the rte_read**()/rte_write**() functions are meant for small register accesses.
Maybe there's some useful code in "drivers/raw", but this area is completely unknown to me, sorry.
Try replying with what you wrote above to the thread on users@dpdk.org and Cc: the Intel people listed in the MAINTAINERS file as responsible for "drivers/raw".
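[For context, a minimal, self-contained sketch of the access pattern being discussed; this is illustrative C, not DPDK code, and names like bar_mem are hypothetical stand-ins:]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for a mapped BAR window; in a PMD this would be
 * something like rte_pci_device->mem_resource[n].addr. */
static uint8_t bar_mem[512];

/* Per-DW programmed I/O, which is what a loop of rte_write32() calls
 * amounts to: each iteration is one independent 32-bit CPU store, so
 * the link never sees more than a 1-2 DW memory-write TLP per access. */
static void pio_copy32(volatile uint32_t *dst, const uint32_t *src,
                       size_t dwords)
{
    for (size_t i = 0; i < dwords; i++)
        dst[i] = src[i];
}
```

A plain memcpy() to the same mapping is no different at the bus level: the compiler emits CPU-register-width stores (at most 64 bytes even with AVX-512), so a single 128-DW write TLP can only come from a device-side DMA engine reading the buffer from memory, not from CPU stores.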
-----Original Message-----
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Sent: Thursday, November 7, 2024 10:42 AM
To: Wieckowski, Jacob <Jacob.Wieckowski@vector.com>
Cc: users@dpdk.org
Subject: Re: DMA Transfers to PCIe Bar Memory
2024-11-07 09:16 (UTC+0000), Wieckowski, Jacob:
> Hi Dmitry,
>
> thank you for the quick response.
>
> Ok, DMA in the classic sense is not possible.
>
> However, if you carry out a write transfer into the BAR memory from DPDK, then, as I understand it, this access should be divided into several small TLPs that comply with the maximum payload size specified in config space.
>
> Can block transfers of 512 bytes be carried out with rte_memcpy? The DPDK documentation states that the AVX-512 memcpy option must be enabled for x86 platforms.
>
> Do other special precautions have to be taken in the DPDK environment to set up this kind of transfer?
Could you please start with the problem you're solving?
DPDK uses DMA internally (mainly) to transfer packet data from/to HW.
It puts the physical address of the buffer, etc., into a NIC queue descriptor and writes to a doorbell register; the NIC then DMA-writes/reads the buffer, and PCIe transfer sizes are presumably selected by the HW.
All of this is within PMD (userspace drivers), no API is exposed.
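[The descriptor/doorbell pattern described above can be sketched roughly as follows; this is self-contained illustrative C, not any particular PMD, and the descriptor layout is hypothetical:]

```c
#include <stdint.h>

/* Hypothetical descriptor layout; every NIC defines its own. */
struct tx_desc {
    uint64_t buf_iova; /* IOVA/physical address of the packet buffer */
    uint32_t length;   /* bytes the device should DMA-read */
    uint32_t flags;    /* e.g. a "descriptor valid" bit */
};

#define RING_SIZE 256

/* Stand-in for a doorbell register inside the BAR; a PMD would write it
 * through the mapped region with rte_write32(). */
static volatile uint32_t doorbell;
static struct tx_desc ring[RING_SIZE];

/* Post one buffer: fill a descriptor, advance the tail, ring the
 * doorbell. Only this single 32-bit MMIO write crosses the bus from
 * the CPU; the bulk data then moves by device-initiated DMA, where the
 * device chooses TLP sizes up to the configured max payload size. */
static uint16_t post_buffer(uint16_t tail, uint64_t iova, uint32_t len)
{
    ring[tail].buf_iova = iova;
    ring[tail].length = len;
    ring[tail].flags = 1;
    tail = (tail + 1) % RING_SIZE;
    doorbell = tail;
    return tail;
}
```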
rte_memcpy() is intended for copy from RAM to RAM.
You could probably Cc: Morten Brørup <mb@smartsharesystems.com>, but I doubt that rte_memcpy() is specialized for DMA in any way.
The buffer may be filled with rte_memcpy() by the application, but this is done before handing the buffer to the PMD, and thus before DMA.
Are you looking for functionality of "dmadev" library?
https://doc.dpdk.org/guides/prog_guide/dmadev.html
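[An untested sketch of the dmadev pattern from that guide, for orientation only: it assumes a dmadev (dev_id) already configured with one virtual channel, and that bar_iova is a valid IOVA for the BAR window — obtaining such an IOVA is device- and IOMMU-specific and is not something dmadev itself provides.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <rte_dma.h>

static int copy_to_bar(int16_t dev_id, rte_iova_t src_iova,
                       rte_iova_t bar_iova, uint32_t len)
{
    /* Enqueue one copy and submit it immediately. */
    int ret = rte_dma_copy(dev_id, 0, src_iova, bar_iova, len,
                           RTE_DMA_OP_FLAG_SUBMIT);
    if (ret < 0)
        return ret;

    /* Poll until the copy completes. */
    uint16_t last_idx;
    bool error = false;
    while (rte_dma_completed(dev_id, 0, 1, &last_idx, &error) == 0)
        ;
    return error ? -1 : 0;
}
```

Whether the DMA engine actually emits full-MPS (128-DW) TLPs toward the BAR is up to the hardware, not the API.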
Thread overview: 7+ messages
2024-11-06 15:50 Wieckowski, Jacob
2024-11-06 17:46 ` Dmitry Kozlyuk
2024-11-07 9:16 ` Wieckowski, Jacob
2024-11-07 9:42 ` Dmitry Kozlyuk
2024-11-07 13:09 ` Wieckowski, Jacob [this message]
2024-11-07 16:13 ` Stephen Hemminger
2024-11-07 16:11 ` Stephen Hemminger