From: Jerin Jacob <jerinjacobk@gmail.com>
To: Timothy McDaniel <timothy.mcdaniel@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>, dpdk-dev <dev@dpdk.org>
Subject: Re: [PATCH] doc: update DLB2 documentation
Date: Mon, 4 Jul 2022 21:41:59 +0530 [thread overview]
Message-ID: <CALBAE1ObNL-b3Rhr_d26H2VDMR+N4jcLmuttDAXO4rm0w7ycmA@mail.gmail.com> (raw)
In-Reply-To: <20220702193500.1654078-1-timothy.mcdaniel@intel.com>
On Sun, Jul 3, 2022 at 1:05 AM Timothy McDaniel
<timothy.mcdaniel@intel.com> wrote:
>
> This commit updates the dlb2.rst eventdev guide to document the
> new DLB2 features that were added to DPDK 22.07.
> 1) CQ Weight
> 2) Port COS
> 3) Maximum CQ depth
> 4) Maximum enqueue depth
>
> Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Updated the git commit as follows and applied to
dpdk-next-net-eventdev/for-main. Thanks
commit 69aec6abd96030f4197685573c8a60395e573c63 (HEAD -> for-main)
Author: Timothy McDaniel <timothy.mcdaniel@intel.com>
Date: Sat Jul 2 14:35:00 2022 -0500
doc: update DLB2 documentation
Updated the DLB2 guide to document the following features.
1) CQ Weight
2) Port COS
3) Maximum CQ depth
4) Maximum enqueue depth
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
> doc/guides/eventdevs/dlb2.rst | 67 ++++++++++++++++++++++++++++-------
> 1 file changed, 54 insertions(+), 13 deletions(-)
>
> diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
> index bc53618b53..5b21f13b68 100644
> --- a/doc/guides/eventdevs/dlb2.rst
> +++ b/doc/guides/eventdevs/dlb2.rst
> @@ -343,23 +343,21 @@ Class of service
> ~~~~~~~~~~~~~~~~
>
> DLB supports provisioning the DLB bandwidth into 4 classes of service.
> +An LDB port or range of LDB ports may be configured to use one of the classes.
> +If a port's COS is not defined, then it will be allocated from class 0,
> +class 1, class 2, or class 3, in that order, depending on availability.
>
> -- Class 4 corresponds to 40% of the DLB hardware bandwidth
> -- Class 3 corresponds to 30% of the DLB hardware bandwidth
> -- Class 2 corresponds to 20% of the DLB hardware bandwidth
> -- Class 1 corresponds to 10% of the DLB hardware bandwidth
> -- Class 0 corresponds to don't care
> -
> -The classes are applied globally to the set of ports contained in this
> -scheduling domain, which is more appropriate for the bifurcated
> -PMD than for the PF PMD, since the PF PMD supports just 1 scheduling
> -domain.
> -
> -Class of service can be specified in the devargs, as follows
> +The sum of the cos_bw values may not exceed 100, and no more than
> +16 LDB ports may be assigned to a given class of service. If a port's COS
> +is not defined on the command line, then each class is assigned 25% of the
> +bandwidth, and the available load balanced ports are split between the classes.
> +Per-port class of service and bandwidth can be specified in the devargs,
> +as follows:
>
> .. code-block:: console
>
> - --allow ea:00.0,cos=<0..4>
> + --allow ea:00.0,port_cos=Px-Py:<0-3>,cos_bw=5:10:80:5
> + --allow ea:00.0,port_cos=Px:<0-3>,cos_bw=5:10:80:5
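
For example, a hypothetical invocation (the port numbers are illustrative;
the bandwidth split just has to sum to no more than 100):

    --allow ea:00.0,port_cos=0-7:2,cos_bw=10:20:60:10

This would assign LDB ports 0 through 7 to class 2 and give that class 60%
of the bandwidth, with classes 0, 1, and 3 getting 10%, 20%, and 10%.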
>
> Use X86 Vector Instructions
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
> @@ -373,3 +371,46 @@ follows
> .. code-block:: console
>
> --allow ea:00.0,vector_opts_enabled=<y/Y>
> +
> +Maximum CQ Depth
> +~~~~~~~~~~~~~~~~
> +
> +DLB supports configuring the maximum depth of a consumer queue (CQ).
> +The depth must be between 32 and 128, and must be a power of 2. Note
> +that credit deadlocks may occur as a result of changing the default depth.
> +To prevent deadlock, the user may also need to configure the maximum
> +enqueue depth.
> +
> + .. code-block:: console
> +
> + --allow ea:00.0,max_cq_depth=<depth>
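
For instance (hypothetical values; 64 is a power of 2 inside the documented
32..128 range):

    --allow ea:00.0,max_cq_depth=64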
> +
> +Maximum Enqueue Depth
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +DLB supports configuring the maximum enqueue depth of a producer port (PP).
> +The depth must be between 32 and 1024, and must be a power of 2.
> +
> + .. code-block:: console
> +
> + --allow ea:00.0,max_enqueue_depth=<depth>
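
For instance (hypothetical values; any power of 2 between 32 and 1024 works):

    --allow ea:00.0,max_enqueue_depth=256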
> +
> +QE Weight
> +~~~~~~~~~
> +
> +DLB supports advanced scheduling mechanisms, such as CQ weight.
> +Each load balanced CQ has a configurable work capacity (max 256)
> +which corresponds to the total QE weight DLB will allow to be enqueued
> +to that consumer. Every load balanced event/QE carries a weight of 0, 2, 4,
> +or 8 and DLB will increment a (per CQ) load indicator when it schedules a
> +QE to that CQ. The weight is also stored in the history list. When a
> +completion arrives, the weight is popped from the history list and used to
> +decrement the load indicator. This creates a new scheduling condition: a CQ
> +whose load is equal to or greater than its capacity is not available for traffic.
> +Note that the weight may not exceed the maximum CQ depth.
> +
> + .. code-block:: console
> +
> + --allow ea:00.0,cq_weight=all:<weight>
> + --allow ea:00.0,cq_weight=qidA-qidB:<weight>
> + --allow ea:00.0,cq_weight=qid:<weight>
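
A combined, hypothetical example; the weight value is chosen so that it does
not exceed the maximum CQ depth, per the rule stated above:

    --allow ea:00.0,max_cq_depth=128,cq_weight=all:128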
> --
> 2.25.1
>