DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Wieckowski, Jacob" <Jacob.Wieckowski@vector.com>
Cc: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: DMA Transfers to PCIe Bar Memory
Date: Thu, 7 Nov 2024 08:13:07 -0800	[thread overview]
Message-ID: <20241107081307.51902773@hermes.local> (raw)
In-Reply-To: <AS4PR01MB10408DB4E11FB95692AA50774FD5C2@AS4PR01MB10408.eurprd01.prod.exchangelabs.com>

On Thu, 7 Nov 2024 13:09:41 +0000
"Wieckowski, Jacob" <Jacob.Wieckowski@vector.com> wrote:

> Hi Jacob,
> 
> 2024-11-07 11:50 (UTC+0000), Wieckowski, Jacob:
> > Hi Dmitry,
> > 
> > I am sorry, I got a bit ahead of myself there.
> > 
> > We have started a DPDK evaluation project to build knowledge and understanding of the DPDK framework. 
> > We used an E1000 driver with a workaround to gain access to the PCIe 5.0 R-Tile on the Intel Agilex7 FPGA under Windows. 
> > 
> > Accesses to the PCIe BAR work in principle, but we can currently only carry out 64-bit accesses to the BAR memory.
> > In the PCIe config space, a maximum payload size of 512 bytes is configured in the capabilities register.
> > The Intel core in the FPGA and the root complex both support TLPs of this length.
> > 
> > We use the rte_read32 and rte_write32 functions to access the BAR memory, which evidently issue accesses of at most 2 DWs.
> > We were able to confirm this in the FPGA, because only TLPs of length 2 arrived on the RX interface.
> > 
> > How can a block transfer be initiated in DPDK so that TLPs with a length of 128 DWs are generated on the PCIe bus?
> 
> Thanks, now I understand what you need.
> Unfortunately, DPDK has no relevant API; rte_read*()/rte_write*() are meant for small register accesses.
> Maybe there's some useful code in "drivers/raw", but this area is completely unknown to me, sorry.
> Try replying with what you wrote above to the thread on users@dpdk.org, and Cc the Intel people listed in the MAINTAINERS file as responsible for "drivers/raw".
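
For reference, the access pattern described above looks roughly like the sketch below. rte_read32()/rte_write32() and struct rte_pci_device are real DPDK APIs; the BAR index and register offsets are invented for illustration, and the exact header name (rte_bus_pci.h vs. bus_pci_driver.h) depends on the DPDK release.

    #include <rte_io.h>
    #include <rte_bus_pci.h>    /* bus_pci_driver.h on newer DPDK */

    static void
    poke_bar(struct rte_pci_device *pdev)
    {
        /* BAR 0 as mapped by rte_pci_map_device() */
        volatile uint8_t *bar = pdev->mem_resource[0].addr;

        /* Each call issues one small MMIO access, so the resulting
         * memory read/write TLPs stay at 1-2 DW on the bus, no matter
         * how large a payload the link itself would allow. */
        rte_write32(0x1, bar + 0x100);      /* hypothetical register */
        (void)rte_read32(bar + 0x104);      /* hypothetical register */
    }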

In general, direct access to PCI devices by applications is discouraged.
All PCI access should live in driver code. What you describe is a bug in the E1000
driver and should be fixed there.
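
For completeness: TLPs of 128 DWs normally come from the device's own bus-master DMA engine, not from CPU loads/stores to the BAR. A minimal driver-side sketch, assuming a hypothetical DMA engine whose address/length/go registers sit at invented offsets; rte_memzone_reserve_aligned(), RTE_MEMZONE_IOVA_CONTIG, and the memzone's iova field are real DPDK APIs:

    #include <rte_io.h>
    #include <rte_lcore.h>
    #include <rte_memzone.h>

    /* Hypothetical DMA engine registers in the FPGA's BAR */
    #define DMA_ADDR_LO 0x200
    #define DMA_ADDR_HI 0x204
    #define DMA_LEN     0x208
    #define DMA_GO      0x20c

    static int
    start_dma(volatile uint8_t *bar, size_t len)
    {
        /* IOVA-contiguous buffer the device can read via bus-master DMA */
        const struct rte_memzone *mz = rte_memzone_reserve_aligned(
                "dma_buf", len, rte_socket_id(),
                RTE_MEMZONE_IOVA_CONTIG, 4096);
        if (mz == NULL)
            return -1;

        /* The device fetches the buffer itself, so the transfer is split
         * into TLPs up to the negotiated max payload / read request size
         * (up to 512 B == 128 DW here), independent of CPU access width. */
        rte_write32((uint32_t)mz->iova, bar + DMA_ADDR_LO);
        rte_write32((uint32_t)(mz->iova >> 32), bar + DMA_ADDR_HI);
        rte_write32((uint32_t)len, bar + DMA_LEN);
        rte_write32(1, bar + DMA_GO);
        return 0;
    }

The point of the design is that the CPU only writes a few small control registers; the large transactions are generated by the device, which is why this belongs in driver code rather than in the application.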

Thread overview: 7+ messages
2024-11-06 15:50 Wieckowski, Jacob
2024-11-06 17:46 ` Dmitry Kozlyuk
2024-11-07  9:16   ` Wieckowski, Jacob
2024-11-07  9:42     ` Dmitry Kozlyuk
2024-11-07 13:09       ` Wieckowski, Jacob
2024-11-07 16:13         ` Stephen Hemminger [this message]
2024-11-07 16:11       ` Stephen Hemminger
