From: Timothy McDaniel <timothy.mcdaniel@intel.com>
Cc: dev@dpdk.org, erik.g.carrillo@intel.com,
harry.van.haaren@intel.com, jerinj@marvell.com,
thomas@monjalon.net, timothy.mcdaniel@intel.com
Subject: [dpdk-dev] [PATCH] doc: fix guide for DLB v2.5
Date: Sat, 15 May 2021 12:27:34 -0500 [thread overview]
Message-ID: <1621099654-25535-1-git-send-email-timothy.mcdaniel@intel.com> (raw)
- Remove references to deferred scheduling. That feature applies
  to DLB v1.0 only.
- Replace vdev references with the PCI devargs equivalent.
- Add a section for the new "vector_opts_enabled" devarg.
Fixes: 7c6cc633fc7d ("doc: update guide for DLB v2.5")
Cc: timothy.mcdaniel@intel.com
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
doc/guides/eventdevs/dlb2.rst | 54 +++++++++++++++--------------------
1 file changed, 23 insertions(+), 31 deletions(-)
diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 31de6bc47..bce984ca0 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -152,19 +152,19 @@ These pools' sizes are controlled by the nb_events_limit field in struct
rte_event_dev_config. The load-balanced pool is sized to contain
nb_events_limit credits, and the directed pool is sized to contain
nb_events_limit/4 credits. The directed pool size can be overridden with the
-num_dir_credits vdev argument, like so:
+num_dir_credits devargs argument, like so:
.. code-block:: console
- --vdev=dlb2_event,num_dir_credits=<value>
+ --allow ea:00.0,num_dir_credits=<value>
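+
+For example, a hypothetical override raising the directed pool to 1024
+credits (the value is chosen only for illustration):
+
+.. code-block:: console
+
+ --allow ea:00.0,num_dir_credits=1024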
This can be used if the default allocation is too low or too high for the
-specific application needs. The PMD also supports a vdev arg that limits the
+specific application needs. The PMD also supports a devarg that limits the
max_num_events reported by rte_event_dev_info_get():
.. code-block:: console
- --vdev=dlb2_event,max_num_events=<value>
+ --allow ea:00.0,max_num_events=<value>
By default, max_num_events is reported as the total available load-balanced
credits. If multiple DLB-based applications are being used, it may be desirable
@@ -293,27 +293,6 @@ The PMD does not support the following configuration sequences:
This sequence is not supported because the event device must be reconfigured
before its ports or queues can be.
-Deferred Scheduling
-~~~~~~~~~~~~~~~~~~~
-
-The DLB PMD's default behavior for managing a CQ is to "pop" the CQ once per
-dequeued event before returning from rte_event_dequeue_burst(). This frees the
-corresponding entries in the CQ, which enables the DLB to schedule more events
-to it.
-
-To support applications seeking finer-grained scheduling control -- for example
-deferring scheduling to get the best possible priority scheduling and
-load-balancing -- the PMD supports a deferred scheduling mode. In this mode,
-the CQ entry is not popped until the *subsequent* rte_event_dequeue_burst()
-call. This mode only applies to load-balanced event ports with dequeue depth of
-1.
-
-To enable deferred scheduling, use the defer_sched vdev argument like so:
-
- .. code-block:: console
-
- --vdev=dlb2_event,defer_sched=on
-
Atomic Inflights Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -336,11 +315,11 @@ buffer space (e.g. if not all queues are used, or aren't used for atomic
scheduling).
The PMD provides a dev arg to override the default per-queue allocation. To
-increase a vdev's per-queue atomic-inflight allocation to (for example) 64:
+increase the per-queue atomic-inflight allocation to (for example) 64:
.. code-block:: console
- --vdev=dlb2_event,atm_inflights=64
+ --allow ea:00.0,atm_inflights=64
QID Depth Threshold
~~~~~~~~~~~~~~~~~~~
@@ -363,9 +342,9 @@ shown below.
.. code-block:: console
- --vdev=dlb2_event,qid_depth_thresh=all:<threshold_value>
- --vdev=dlb2_event,qid_depth_thresh=qidA-qidB:<threshold_value>
- --vdev=dlb2_event,qid_depth_thresh=qid:<threshold_value>
+ --allow ea:00.0,qid_depth_thresh=all:<threshold_value>
+ --allow ea:00.0,qid_depth_thresh=qidA-qidB:<threshold_value>
+ --allow ea:00.0,qid_depth_thresh=qid:<threshold_value>
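+
+For example, to apply a threshold of 32 to every queue (the value is
+hypothetical, chosen only for illustration):
+
+.. code-block:: console
+
+ --allow ea:00.0,qid_depth_thresh=all:32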
Class of service
~~~~~~~~~~~~~~~~
@@ -387,4 +366,17 @@ Class of service can be specified in the devargs, as follows
.. code-block:: console
- --vdev=dlb2_event,cos=<0..4>
+ --allow ea:00.0,cos=<0..4>
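+
+For example, a hypothetical invocation selecting class of service 2 (the
+value is chosen only for illustration):
+
+.. code-block:: console
+
+ --allow ea:00.0,cos=2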
+
+Use X86 Vector Instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DLB supports using x86 vector instructions to optimize the data path.
+
+The default mode of operation is to use scalar instructions, but
+the use of vector instructions can be enabled in the devargs, as
+follows:
+
+ .. code-block:: console
+
+ --allow ea:00.0,vector_opts_enabled=<y/Y>
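+
+For example, a full (hypothetical) command line passing this devarg to the
+eventdev test application might look like the following; the test name and
+lcore layout are illustrative only:
+
+.. code-block:: console
+
+ ./dpdk-test-eventdev --allow ea:00.0,vector_opts_enabled=y -- --test=order_queue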
--
2.23.0