From: Rashmi Shetty <rashmi.shetty@intel.com>
To: dev@dpdk.org
Cc: jerinj@marvell.com, harry.van.haaren@intel.com,
pravin.pathak@intel.com, mike.ximing.chen@intel.com,
timothy.mcdaniel@intel.com,
Rashmi Shetty <rashmi.shetty@intel.com>
Subject: [PATCH] doc/dlb2: update dlb2 documentation
Date: Mon, 6 Dec 2021 10:36:39 -0600
Message-ID: <20211206163639.2220123-1-rashmi.shetty@intel.com>
The number of directed credits, atomic inflights, and the history list
size are updated to match what DLB 2.0 supports. A revised Class of
Service section is added.
Signed-off-by: Rashmi Shetty <rashmi.shetty@intel.com>
---
doc/guides/eventdevs/dlb2.rst | 32 +++++++++++---------------------
1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index bce984ca08..c2887a71dc 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -151,7 +151,7 @@ load-balanced queues, and directed credits are used for directed queues.
These pools' sizes are controlled by the nb_events_limit field in struct
rte_event_dev_config. The load-balanced pool is sized to contain
nb_events_limit credits, and the directed pool is sized to contain
-nb_events_limit/4 credits. The directed pool size can be overridden with the
+nb_events_limit/2 credits. The directed pool size can be overridden with the
num_dir_credits devargs argument, like so:
.. code-block:: console
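As a rough illustration of the sizing rule above, the default split can be sketched as follows (Python purely for illustration, not PMD code; `dlb2_credit_pools` is a hypothetical helper, and the 8192-event total is an assumed example value):

```python
def dlb2_credit_pools(nb_events_limit, num_dir_credits=None):
    """Model the default credit pool split (illustrative only).

    The load-balanced pool holds nb_events_limit credits; the directed
    pool holds nb_events_limit/2 unless overridden via num_dir_credits.
    """
    ldb_credits = nb_events_limit
    dir_credits = (num_dir_credits if num_dir_credits is not None
                   else nb_events_limit // 2)
    return ldb_credits, dir_credits

# Example: a device configured with 8192 total events (assumed value)
print(dlb2_credit_pools(8192))        # -> (8192, 4096), default directed pool
print(dlb2_credit_pools(8192, 1024))  # -> (8192, 1024), overridden pool
```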
@@ -239,8 +239,8 @@ queue A.
Due to this, workers should stop retrying after a time, release the events it
is attempting to enqueue, and dequeue more events. It is important that the
worker release the events and don't simply set them aside to retry the enqueue
-again later, because the port has limited history list size (by default, twice
-the port's dequeue_depth).
+again later, because the port has a limited history list size (by default,
+the same as the port's dequeue_depth).
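The pressure described above can be sketched as simple bookkeeping (Python, illustrative only; `history_slots_free` is a hypothetical helper, not a PMD API):

```python
def history_slots_free(dequeue_depth, outstanding_events):
    """Free history-list slots on a port (sketch).

    Each dequeued-but-not-yet-released event occupies one history list
    entry; with the default size equal to dequeue_depth, a worker that
    sets events aside instead of releasing them eventually stalls.
    """
    return max(dequeue_depth - outstanding_events, 0)

# A port with dequeue_depth 32 that holds all 32 events without
# releasing them has no history list space left:
print(history_slots_free(32, 32))  # -> 0, further dequeues make no progress
```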
Priority
~~~~~~~~
@@ -309,17 +309,11 @@ scheduled. The likelihood of this case depends on the eventdev configuration,
traffic behavior, event processing latency, potential for a worker to be
interrupted or otherwise delayed, etc.
-By default, the PMD allocates 16 buffer entries for each load-balanced queue,
-which provides an even division across all 128 queues but potentially wastes
+By default, the PMD allocates 64 buffer entries for each load-balanced queue,
+which provides an even division across all 32 queues but potentially wastes
buffer space (e.g. if not all queues are used, or aren't used for atomic
scheduling).
-The PMD provides a dev arg to override the default per-queue allocation. To
-increase per-queue atomic-inflight allocation to (for example) 64:
-
- .. code-block:: console
-
- --allow ea:00.0,atm_inflights=64
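The even division described above works out as follows (a Python sketch of the arithmetic, not PMD code):

```python
# Per-queue atomic-inflight allocation in DLB 2.0 (values from the text above)
ENTRIES_PER_QUEUE = 64   # buffer entries allocated per load-balanced queue
NUM_LDB_QUEUES = 32      # load-balanced queues sharing the buffer evenly

total_entries = ENTRIES_PER_QUEUE * NUM_LDB_QUEUES
print(total_entries)  # 2048 entries overall, whether or not every queue is used
```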
QID Depth Threshold
~~~~~~~~~~~~~~~~~~~
@@ -337,7 +331,7 @@ Per queue threshold metrics are tracked in the DLB xstats, and are also
returned in the impl_opaque field of each received event.
The per qid threshold can be specified as part of the device args, and
-can be applied to all queue, a range of queues, or a single queue, as
+can be applied to all queues, a range of queues, or a single queue, as
shown below.
.. code-block:: console
@@ -350,14 +344,10 @@ Class of service
~~~~~~~~~~~~~~~~
DLB supports provisioning the DLB bandwidth into 4 classes of service.
+By default, each of the 4 classes (0-3) corresponds to 25% of the DLB
+hardware bandwidth.
-- Class 4 corresponds to 40% of the DLB hardware bandwidth
-- Class 3 corresponds to 30% of the DLB hardware bandwidth
-- Class 2 corresponds to 20% of the DLB hardware bandwidth
-- Class 1 corresponds to 10% of the DLB hardware bandwidth
-- Class 0 corresponds to don't care
-
-The classes are applied globally to the set of ports contained in this
+The classes are applied globally to the set of ports contained in the
scheduling domain, which is more appropriate for the bifurcated
PMD than for the PF PMD, since the PF PMD supports just 1 scheduling
domain.
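The default split can be stated as a small sketch (Python, illustrative only):

```python
NUM_COS = 4  # DLB classes of service, numbered 0-3

# Each class gets an equal share of the hardware bandwidth by default.
bandwidth_pct = {cos: 100 / NUM_COS for cos in range(NUM_COS)}
print(bandwidth_pct)  # every class of service at 25.0%
```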
@@ -366,7 +356,7 @@ Class of service can be specified in the devargs, as follows
.. code-block:: console
- --allow ea:00.0,cos=<0..4>
+ --allow ea:00.0,cos=<0..3>
Use X86 Vector Instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -379,4 +369,4 @@ follows
.. code-block:: console
- --allow ea:00.0,vector_opts_enabled=<y/Y>
+ --allow ea:00.0,vector_opts_enable=<y/Y>
--
2.25.1
Thread overview: 5+ messages
2021-12-06 16:36 Rashmi Shetty [this message]
2021-12-07 23:01 ` [PATCH v2] " Rashmi Shetty
2021-12-14 16:08 ` McDaniel, Timothy
2022-01-20 11:51 ` Jerin Jacob
2021-12-14 14:51 ` [PATCH] " McDaniel, Timothy