DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@amd.com>
To: Shepard Siegel <shepard.siegel@atomicrules.com>
Cc: ed.czeck@atomicrules.com, dev@dpdk.org
Subject: Re: [PATCH 1/4] doc: clarify the existing net/ark guide
Date: Mon, 13 Feb 2023 22:58:32 +0000	[thread overview]
Message-ID: <4e5f2100-9c5e-ee9b-0c45-5a4351461b0a@amd.com>
In-Reply-To: <CAMMLSKBJeyFwOO4zySEXKG+44X4YA065geKpVWp4QAnFoovVKA@mail.gmail.com>

On 2/13/2023 5:31 PM, Shepard Siegel wrote:
> Yes, what is different here is that the MBUF size is communicated
> from the PMD to the hardware which *changes its behavior* of data motion
> to optimize throughput and latency as a function of that setting. And it
> does that per-queue. And can be done at runtime (that's the
> dynamic part). ... To the best of our knowledge, other PMDs use this as a
> host-software setting only - and their DPDK-naive DMA engines just use
> the same fixed settings (respecting PCIe, of course).
> 
> Hope that helps. If it is contentious in any way, we are fine with
> removing that line. We added it as users have remarked it is a unique
> capability they think we should point out.
> 

Just trying to clarify the feature you are referring to; this helps your
users too.

I was checking whether this feature is something already taken for
granted and whether the documented feature needs more detail, but it is
not taken for granted, so it is OK to keep it as it is.
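For context, the generic way a DPDK application already gets different mbuf
sizes per queue is to create one mempool per size and hand each RX queue its
own pool at setup time. A minimal sketch (pool names, counts and sizes below
are illustrative, not taken from the ark PMD):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: per-queue mbuf sizing via separate mempools, which any PMD
 * supports at queue-setup time. All numbers here are illustrative. */
static int
setup_mixed_queues(uint16_t port_id)
{
	/* 2KB buffers for a latency-sensitive queue. */
	struct rte_mempool *small_pool = rte_pktmbuf_pool_create(
		"pool_2k", 8192, 256, 0,
		2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
	/* 9KB buffers for a jumbo/throughput-oriented queue. */
	struct rte_mempool *large_pool = rte_pktmbuf_pool_create(
		"pool_9k", 4096, 256, 0,
		9216 + RTE_PKTMBUF_HEADROOM, rte_socket_id());

	if (small_pool == NULL || large_pool == NULL)
		return -1;

	/* Each RX queue is bound to its own mempool, so each queue
	 * receives into mbufs of its own size. */
	if (rte_eth_rx_queue_setup(port_id, 0, 1024,
			rte_socket_id(), NULL, small_pool) != 0)
		return -1;
	return rte_eth_rx_queue_setup(port_id, 1, 1024,
			rte_socket_id(), NULL, large_pool);
}
```

The point under discussion is whether the hardware itself changes its
data-motion behavior based on that per-queue size, which is what the ark
documentation line claims, as opposed to the size being a host-software
setting only as in the sketch above.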


> -Shep
> 
> 
> On Mon, Feb 13, 2023 at 12:23 PM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> 
>     On 2/13/2023 5:09 PM, Shepard Siegel wrote:
>     > Hi Ferruh,
>     >
>     > Yes, there will probably be next versions in the future. If you don't
>     > mind making the marker length adjustment, that would be great.
>     >
>     > Regarding MBUF (re)sizing - Arkville supports the ability to
>     > configure or reconfigure the MBUF size used on a per-queue basis.
>     > This feature is useful when there are conflicting motivations for
>     > using smaller/larger MBUF sizes. For example, a user can switch a
>     > queue to use the size best suited to that queue's application workload.
>     >
> 
>     An application can allocate multiple mempools with different sizes
>     and assign them to specific queues; this is the same for all PMDs.
>     Is the ark PMD doing something specific here, or are you referring
>     to something else?
> 
>     And what does the 'dynamic' emphasis mean here?
> 
> 
>     > -Shep
>     >
>     >
>     > On Mon, Feb 13, 2023 at 10:46 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>     >
>     >     On 2/13/2023 2:58 PM, Shepard Siegel wrote:
>     >     > Add detail for the existing Arkville configurations FX0 and FX1.
>     >     > Corrected minor errors of omission.
>     >     >
>     >     > Signed-off-by: Shepard Siegel <shepard.siegel@atomicrules.com>
>     >     > ---
>     >     >  doc/guides/nics/ark.rst | 18 ++++++++++++++++++
>     >     >  1 file changed, 18 insertions(+)
>     >     >
>     >     > diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
>     >     > index ba00f14e80..edaa02dc96 100644
>     >     > --- a/doc/guides/nics/ark.rst
>     >     > +++ b/doc/guides/nics/ark.rst
>     >     > @@ -52,6 +52,10 @@ board. While specific capabilities such as number of physical
>     >     >  hardware queue-pairs are negotiated; the driver is designed to
>     >     >  remain constant over a broad and extendable feature set.
>     >     > 
>     >     > +* FPGA Vendors Supported: AMD/Xilinx and Intel
>     >     > +* Number of RX/TX Queue-Pairs: up to 128
>     >     > +* PCIe Endpoint Technology: Gen3, Gen4, Gen5
>     >     > +
>     >     >  Intentionally, Arkville by itself DOES NOT provide common NIC
>     >     >  capabilities such as offload or receive-side scaling (RSS).
>     >     >  These capabilities would be viewed as a gate-level "tax" on
>     >     > @@ -303,6 +307,18 @@ ARK PMD supports the following Arkville RTL PCIe instances including:
>     >     >  * ``1d6c:101e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover for Agilex R-Tile]
>     >     >  * ``1d6c:101f`` - AR-TK242 [2x100GbE Packet Capture Device]
>     >     > 
>     >     > +Arkville RTL Core Configurations
>     >     > +-------------------------------------
>     >     > +
>     >
>     >     The title marker length (-) should be the same as the title
>     >     length. Can you please fix it if there will be a next version?
>     >     If not, I can fix it while merging.
>     >
>     >
>     >     > +Arkville's RTL core may be configured by the user with different
>     >     > +datapath widths to balance throughput against FPGA logic area. The ARK PMD
>     >     > +has introspection on the RTL core configuration and acts accordingly.
>     >     > +All Arkville configurations present identical RTL user-facing AXI stream
>     >     > +interfaces for both AMD/Xilinx and Intel FPGAs.
>     >     > +
>     >     > +* ARK-FX0 - 256-bit 32B datapath (PCIe Gen3, Gen4)
>     >     > +* ARK-FX1 - 512-bit 64B datapath (PCIe Gen3, Gen4, Gen5)
>     >     > +
>     >     >  DPDK and Arkville Firmware Versioning
>     >     >  -------------------------------------
>     >     > 
>     >     > @@ -334,6 +350,8 @@ Supported Features
>     >     >  ------------------
>     >     > 
>     >     >  * Dynamic ARK PMD extensions
>     >     > +* Dynamic per-queue MBUF (re)sizing up to 32KB
>     >
>     >     What is this feature? What does it mean to size/resize mbuf
>     >     dynamically?
>     >
>     >     > +* SR-IOV, VF-based queue-segregation
>     >     >  * Multiple receive and transmit queues
>     >     >  * Jumbo frames up to 9K
>     >     >  * Hardware Statistics
>     >
> 


Thread overview: 19+ messages
2023-02-11 14:14 [PATCH 1/2] net/ark: add new device to PCIe allowlist Shepard Siegel
2023-02-11 14:14 ` [PATCH 2/2] doc: update ark guide to include new PCIe device Shepard Siegel
2023-02-13 13:46 ` [PATCH 1/2] net/ark: add new device to PCIe allowlist Ferruh Yigit
2023-02-13 15:02   ` Shepard Siegel
2023-02-13 14:58 ` [PATCH 1/4] doc: clarify the existing net/ark guide Shepard Siegel
2023-02-13 14:58   ` [PATCH 2/4] net/ark: add new device to PCIe allowlist Shepard Siegel
2023-02-13 15:51     ` Ferruh Yigit
2023-02-13 17:12       ` Shepard Siegel
2023-02-13 14:58   ` [PATCH 3/4] doc: update ark guide to include new PCIe device Shepard Siegel
2023-02-13 14:58   ` [PATCH 4/4] doc: update Release Notes Shepard Siegel
2023-02-13 15:46   ` [PATCH 1/4] doc: clarify the existing net/ark guide Ferruh Yigit
2023-02-13 17:09     ` Shepard Siegel
2023-02-13 17:23       ` Ferruh Yigit
2023-02-13 17:31         ` Shepard Siegel
2023-02-13 22:58           ` Ferruh Yigit [this message]
2023-02-13 19:58 ` [PATCH 1/2] " Shepard Siegel
2023-02-13 19:58   ` [PATCH 2/2] net/ark: add new ark PCIe device Shepard Siegel
2023-02-13 23:39     ` Ferruh Yigit
2023-02-13 23:39   ` [PATCH 1/2] doc: clarify the existing net/ark guide Ferruh Yigit
