* [PATCH 1/9] doc: reword design section in contributors guidelines
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 2/9] doc: reword pmd section in prog guide Nandini Persad
` (8 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
minor editing for grammar and syntax of design section
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
.mailmap | 1 +
doc/guides/contributing/design.rst | 79 ++++++++++++++----------------
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
3 files changed, 38 insertions(+), 44 deletions(-)
diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu <naga.sureshx.somarowthu@intel.com>
Nalla Pradeep <pnalla@marvell.com>
Na Na <nana.nn@alibaba-inc.com>
Nan Chen <whutchennan@gmail.com>
+Nandini Persad <nandinipersad361@gmail.com>
Nannan Lu <nannan.lu@intel.com>
Nan Zhou <zhounan14@huawei.com>
Narcisa Vasile <navasile@linux.microsoft.com> <navasile@microsoft.com> <narcisa.vasile@microsoft.com>
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index b724177ba1..921578aec5 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -8,22 +8,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBSD, Linux, etc.).
+When feasible, such architecture- or environment-specific code should be provided via standard APIs in the EAL.
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+By convention, a file is specific if it is located in a directory named for an architecture or environment. Otherwise, it is common.
+
+For example:
+
+A file located in a subdir of the "x86_64" directory is specific to this architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture- or executive environment-specific code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
-When absolutely necessary, there are several ways to handle specific code:
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
+* When the differences are small and can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
+
.. code-block:: c
@@ -33,9 +37,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+* When the differences are more significant, use build definition macros and conditions in the Meson build file.
+In this case, the code is split into two separate files that are architecture or environment specific.
+This should only apply inside the EAL library.
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,7 +47,7 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when building for these architectures.
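
As an illustration, a minimal sketch of guarding architecture-specific code with these macros (the ``app_prefetch()`` helper is hypothetical):

.. code-block:: c

   #include <rte_common.h>

   /* hypothetical helper, shown only to illustrate the RTE_ARCH_* macros */
   static inline void
   app_prefetch(const void *p)
   {
   #if defined(RTE_ARCH_X86_64) || defined(RTE_ARCH_ARM64)
           __builtin_prefetch(p, 0, 3);
   #else
           RTE_SET_USED(p);
   #endif
   }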
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,30 +55,21 @@ Per Execution Environment Sources
The following macro options can be used:
* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only when building for this execution environment.
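
A similar sketch for the execution environment macros (the ``app_exec_env()`` helper is hypothetical):

.. code-block:: c

   /* hypothetical helper returning the environment the binary was built for */
   static const char *
   app_exec_env(void)
   {
   #if defined(RTE_EXEC_ENV_LINUX)
           return "linux";
   #elif defined(RTE_EXEC_ENV_FREEBSD)
           return "freebsd";
   #elif defined(RTE_EXEC_ENV_WINDOWS)
           return "windows";
   #else
           return "unknown";
   #endif
   }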
Mbuf features
-------------
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynamic" mbuf area is the default choice for the new features.
-
-The "dynamic" area is eating the remaining space in mbuf,
-and some existing "static" fields may need to become "dynamic".
+A designated area in the mbuf stores "dynamically" registered fields and flags. This is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf; however, the ``rte_mbuf`` structure must be kept small (128 bytes).
-Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+As more features are added, the space used by existing "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception that meets specific criteria, including widespread usage, performance and size. Before adding a new static feature, it must be justified by its necessity and its impact on the system's efficiency.
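
As an illustration, a minimal sketch of registering and using a dynamic mbuf field (the field name and helper functions are hypothetical):

.. code-block:: c

   #include <stdint.h>
   #include <rte_mbuf.h>
   #include <rte_mbuf_dyn.h>

   static const struct rte_mbuf_dynfield app_seqno_desc = {
           .name = "app_dynfield_seqno",   /* hypothetical field name */
           .size = sizeof(uint32_t),
           .align = __alignof__(uint32_t),
   };

   static int app_seqno_offset = -1;

   /* register the field once at startup; returns a negative value on failure */
   static int
   app_register_seqno_field(void)
   {
           app_seqno_offset = rte_mbuf_dynfield_register(&app_seqno_desc);
           return app_seqno_offset;
   }

   static inline void
   app_set_seqno(struct rte_mbuf *m, uint32_t seqno)
   {
           *RTE_MBUF_DYNFIELD(m, app_seqno_offset, uint32_t *) = seqno;
   }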
Runtime Information - Logging, Tracing and Telemetry
----------------------------------------------------
-It is often desirable to provide information to the end-user
-as to what is happening to the application at runtime.
-DPDK provides a number of built-in mechanisms to provide this introspection:
+The end user often needs to know what is happening in the application at runtime.
+DPDK provides several built-in mechanisms to provide these insights:
* :ref:`Logging <dynamic_logging>`
* :doc:`Tracing <../prog_guide/trace_lib>`
@@ -82,11 +77,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+ Depending on the severity of the issue, use the appropriate log level:
+ for example, ``ERROR``, ``WARNING`` or ``NOTICE``.
.. note::
@@ -96,22 +91,21 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration, device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
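
As a small illustration of the logging guideline above, a sketch using the generic ``USER1`` log type (the surrounding function is hypothetical):

.. code-block:: c

   #include <rte_log.h>

   static int
   app_configure_device(int ret)
   {
           if (ret < 0) {
                   /* abnormal condition: report it through logging */
                   RTE_LOG(ERR, USER1, "device configure failed: %d\n", ret);
                   return ret;
           }
           /* one-time initialization detail: DEBUG level is sufficient */
           RTE_LOG(DEBUG, USER1, "device configured\n");
           return 0;
   }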
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
@@ -135,13 +129,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
Prevention of ABI changes due to library statistics support
@@ -165,8 +158,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -229,5 +222,5 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 13be715933..0569c5cae6 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -99,7 +99,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH 2/9] doc: reword pmd section in prog guide
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
` (7 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
made small edits to sections 15.1 and 15.5
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/poll_mode_drv.rst | 151 ++++++++++++------------
1 file changed, 73 insertions(+), 78 deletions(-)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and paravirtualized virtio Poll Mode Drivers.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver running in user space) to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and a generic external API for the Ethernet PMDs.
Requirements and Assumptions
----------------------------
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ Packets are then processed on the same core and placed on a port's Tx descriptor ring through an API for transmission.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
+* In the *pipe-line* model, one core polls one or more port's Rx descriptor ring through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
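
As an illustration, a minimal sketch of the run-to-completion model (port and queue setup are assumed to have been done elsewhere):

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mbuf.h>

   #define BURST_SIZE 32

   static void
   app_run_to_completion_loop(uint16_t port_id, uint16_t queue_id)
   {
           struct rte_mbuf *pkts[BURST_SIZE];
           uint16_t nb_rx, nb_tx, i;

           for (;;) {
                   nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
                   if (nb_rx == 0)
                           continue;

                   /* ... process the packets on this same core ... */

                   nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);
                   /* free any packets the NIC could not accept */
                   for (i = nb_tx; i < nb_rx; i++)
                           rte_pktmbuf_free(pkts[i]);
           }
   }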
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
* Retrieve the received packet from the packet queue
-* Process the received packet, up to its retransmission if forwarded
+* Process the received packet up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per-core private resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively implemented by the PMD to m
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are extensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
Logical Cores, Memory and NIC Queues Relationships
@@ -111,7 +110,7 @@ Logical Cores, Memory and NIC Queues Relationships
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
+The buffers should, if possible, remain on the local processor to obtain the best performance results and Rx and Tx buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
@@ -120,12 +119,11 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+concurrently on the same Tx queue without an SW lock. This PMD feature, found in some NICs, is useful for:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Removing explicit spinlocks in some applications where lcores are not mapped to Tx queues in a 1:1 relation.
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* Enabling greater scalability by removing the requirement for a dedicated Tx core, as all workers can send packets.
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
@@ -135,8 +133,8 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
+Each NIC port is uniquely designated by its PCI
+identifiers (bus/bridge, device, function) assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
@@ -149,14 +147,13 @@ Port Ownership
The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+This prevents Ethernet ports from being managed by different entities.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+ It is the DPDK entity's responsibility to set the port owner before using the port and to manage the port usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+It is recommended to set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
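
For illustration, a minimal sketch of claiming ownership of a port (the owner name is arbitrary):

.. code-block:: c

   #include <stdio.h>
   #include <string.h>
   #include <rte_ethdev.h>

   static int
   app_take_port_ownership(uint16_t port_id)
   {
           struct rte_eth_dev_owner owner;
           int ret;

           memset(&owner, 0, sizeof(owner));
           ret = rte_eth_dev_owner_new(&owner.id);
           if (ret != 0)
                   return ret;

           snprintf(owner.name, sizeof(owner.name), "my-app");
           return rte_eth_dev_owner_set(port_id, &owner);
   }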
Device Configuration
~~~~~~~~~~~~~~~~~~~~
@@ -165,7 +162,7 @@ The configuration of each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
@@ -174,7 +171,7 @@ The configuration of each NIC port includes the following operations:
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -210,7 +207,7 @@ Each transmit queue is independently configured with the following information:
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ A value of 0 can be passed during the Tx queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
@@ -222,7 +219,7 @@ Each transmit queue is independently configured with the following information:
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
+ This saves upstream PCIe* bandwidth resulting from Tx descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
@@ -244,7 +241,7 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
@@ -265,7 +262,7 @@ There are two scenarios when an application may want the mbuf released immediate
One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
+ Then, the same packet can be transmitted to the next destination interface.
The application is still responsible for managing any packet manipulations needed
between the different destination interfaces, but a packet copy can be avoided.
This API is independent of whether the packet was transmitted or dropped,
@@ -288,13 +285,13 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+lockfree multithreaded Tx burst on the same Tx queue.
The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
with their appropriate handling by the receive/transmit functions
exported by each PMD. The list of flags and their precise meaning is
-described in the mbuf API documentation and in the in :ref:`Mbuf Library
+described in the mbuf API documentation and in the :ref:`Mbuf Library
<Mbuf_Library>`, section "Meta Information".
Per-Port and Per-Queue Offloads
@@ -303,14 +300,14 @@ Per-Port and Per-Queue Offloads
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
+* A pure per-port offload is one that is supported by the device but is not a per-queue type.
+* A pure per-port offloading cannot be enabled on a queue and disabled on another queue at the same time.
* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* An offload is either per-queue or pure per-port type, but cannot be both on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offloading can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
@@ -329,8 +326,8 @@ per-port type and no matter whether it is set or cleared in
If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+is the one that hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise an error log will be triggered.
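
As an illustration, a sketch of checking an offload capability before enabling it at ``rte_eth_dev_configure()`` time (the offload chosen here is just an example):

.. code-block:: c

   #include <string.h>
   #include <rte_ethdev.h>

   static int
   app_configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
   {
           struct rte_eth_dev_info dev_info;
           struct rte_eth_conf port_conf;
           int ret;

           memset(&port_conf, 0, sizeof(port_conf));

           ret = rte_eth_dev_info_get(port_id, &dev_info);
           if (ret != 0)
                   return ret;

           /* enable the offload only if the port reports support for it */
           if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
                   port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;

           return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
   }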
Poll Mode Driver API
--------------------
@@ -340,8 +337,8 @@ Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
@@ -357,7 +354,7 @@ The rte_mbuf data structure includes specific fields to represent, in a generic
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
-The mbuf structure is fully described in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
+The mbuf structure is described in depth in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
@@ -370,12 +367,12 @@ Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
+parameters applicable to all Ethernet devices. These arguments/parameters can be used to
+specify particular devices and to pass common configuration
parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device which supports the creation of representor ports.
+ This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -392,7 +389,7 @@ parameters to those ports.
-a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]
(Multiple representors in one device argument can be represented as a list)
-Note: PMDs are not required to support the standard device arguments and users
+Note: PMDs are not required to support the standard device arguments. Users
should consult the relevant PMD documentation to see support devargs.
Extended Statistics API
@@ -402,9 +399,9 @@ The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
@@ -439,7 +436,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc., are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -466,8 +463,8 @@ lookup of specific statistics. Performant lookup means two things;
The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
@@ -486,7 +483,7 @@ statistics.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ returns all available statistics.
Application Usage
@@ -496,10 +493,10 @@ Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
+decide what next steps to perform. The following code snippets show how the
xstats API can be used to achieve this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -545,7 +542,7 @@ First step is to get all statistics names and list them:
The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -564,8 +561,7 @@ ids of those statistics by looking up the name as follows:
The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+function call overhead of retrieving statistics and simplifies the application's lookup of multiple statistics.
.. code-block:: c
@@ -585,8 +581,8 @@ statistics simpler for the application.
This array lookup API for xstats allows the application create multiple
"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+call. As an end result, the application can achieve its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
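
A minimal sketch of this pattern, caching the ``id`` of "rx_errors" once and then reading it on the fast path (the wrapper function is hypothetical):

.. code-block:: c

   #include <rte_ethdev.h>

   static int
   app_read_rx_errors(uint16_t port_id, uint64_t *value)
   {
           static uint64_t rx_errors_id;
           static int cached;
           int ret;

           if (!cached) {
                   /* slow path: resolve the name to an id once */
                   ret = rte_eth_xstats_get_id_by_name(port_id, "rx_errors",
                                                       &rx_errors_id);
                   if (ret != 0)
                           return ret;
                   cached = 1;
           }
           /* fast path: fetch the value by id */
           ret = rte_eth_xstats_get_by_id(port_id, &rx_errors_id, value, 1);
           return ret < 0 ? ret : 0;
   }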
@@ -597,23 +593,23 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
+There are times when a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
+consistent with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
+The PMD's duty is to trigger RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications can proceed as follows: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety all these operations should be called from the same thread.
For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+this event and also triggers an interrupt to the VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
@@ -621,13 +617,12 @@ call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
+rte_eth_dev_reset() itself is a generic function that only performs hardware reset operations by calling dev_unint() and
+dev_init(). It does not handle synchronization, which is handled
by application.
The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
+the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
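
For illustration, a minimal sketch of this passive flow: the callback only records the event, and the application later stops the queues and resets the port from its own control thread (function names are hypothetical):

.. code-block:: c

   #include <rte_ethdev.h>

   static volatile int reset_needed;

   /* callback only records the event; the reset is done later by the app */
   static int
   app_reset_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                      void *cb_arg, void *ret_param)
   {
           (void)port_id; (void)event; (void)cb_arg; (void)ret_param;
           reset_needed = 1;
           return 0;
   }

   /* called from the application's control thread */
   static int
   app_handle_reset(uint16_t port_id)
   {
           int ret;

           if (!reset_needed)
                   return 0;
           /* the queues must no longer be polled at this point */
           ret = rte_eth_dev_stop(port_id);
           if (ret != 0)
                   return ret;
           reset_needed = 0;
           return rte_eth_dev_reset(port_id);
   }

   /* registration, typically done at initialization time */
   static int
   app_register_reset_handler(uint16_t port_id)
   {
           return rte_eth_dev_callback_register(port_id,
                           RTE_ETH_EVENT_INTR_RESET,
                           app_reset_event_cb, NULL);
   }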
@@ -635,15 +630,15 @@ The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``. Unlike
+PASSIVE mode, where the application invokes the recovery,
+in PROACTIVE mode the PMD recovers from the error automatically,
and only a small amount of work is required for the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+and ensures that the control path operations fail with a return code ``-EBUSY``.
Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
@@ -655,9 +650,9 @@ three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
- and the recovery is being started.
+ and the recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
@@ -667,7 +662,7 @@ three events are available:
because a larger error may occur during the recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
+ Notify the application that the recovery from the error was successful,
the PMD already re-configures the port,
and the effect is the same as a restart operation.
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH 3/9] doc: reword argparse section in prog guide
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
2024-05-13 15:59 ` [PATCH 2/9] doc: reword pmd section in prog guide Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 19:01 ` Stephen Hemminger
2024-05-13 15:59 ` [PATCH 4/9] doc: reword service cores " Nandini Persad
` (6 subsequent siblings)
9 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
made small edits to sections 6.1 and 6.2 intro
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/argparse_lib.rst | 72 +++++++++++++-------------
1 file changed, 35 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..a2af7d49e9 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,22 +4,21 @@
Argparse Library
================
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy to write user-friendly command-line programs.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Supports parsing of optional arguments (which can take no-value,
+ required-value and optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must take a required-value).
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Issues errors when an invalid argument is provided.
-- Support parsing argument by two ways:
+- Supports parsing arguments in two ways:
#. autosave: used for parsing known value types;
#. callback: will invoke user callback to parse.
@@ -27,7 +26,7 @@ Features and Capabilities
Usage Guide
-----------
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
.. code-block:: C
@@ -89,12 +88,12 @@ The following code demonstrates how to use:
...
}
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
-Every argument must be set whether to carry a value (one of
+Every argument must specify whether it carries a value (one of
``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
@@ -105,26 +104,26 @@ Every argument must be set whether to carry a value (one of
User Input Requirements
~~~~~~~~~~~~~~~~~~~~~~~
-For optional arguments which take no-value,
+For optional arguments which have no-value,
the following mode is supported (take above "--aaa" as an example):
- The single mode: "--aaa" or "-a".
-For optional arguments which take required-value,
+For optional arguments which have required-value,
the following two modes are supported (take above "--bbb" as an example):
- The kv mode: "--bbb=1234" or "-b=1234".
- The split mode: "--bbb 1234" or "-b 1234".
-For optional arguments which take optional-value,
+For optional arguments which have optional-value,
the following two modes are supported (take above "--ccc" as an example):
- The single mode: "--ccc" or "-c".
- The kv mode: "--ccc=123" or "-c=123".
-For positional arguments which must take required-value,
+For positional arguments which must have required-value,
their values are parsing in the order defined.
.. note::
@@ -132,15 +131,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
-Parsing by autosave way
+Parsing the Autosave Method
~~~~~~~~~~~~~~~~~~~~~~~
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method.
+The result will be saved in the ``val_saver`` field.
In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+all use this method. The parsing is as follows:
- For argument "--aaa", it is configured as no-value,
so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +149,27 @@ both use this way, the parsing is as follows:
so the ``bbb_val`` will be set to user input's value
(e.g. will be set to 1234 with input "--bbb 1234").
-- For argument "--ccc", it is configured as optional-value,
- if user only input "--ccc" then the ``ccc_val`` will be set to ``val_set`` field
+- For argument "--ccc", it is configured as optional-value.
+ If the user only inputs "--ccc", then the ``ccc_val`` will be set to the ``val_set`` field
which is 200 in the above example;
- if user input "--ccc=123", then the ``ccc_val`` will be set to 123.
+ If the user inputs "--ccc=123", then the ``ccc_val`` will be set to 123.
- For argument "ooo", it is positional argument,
the ``ooo_val`` will be set to user input's value.
-Parsing by callback way
-~~~~~~~~~~~~~~~~~~~~~~~
-
-It could also choose to use callback to parse,
-just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+Parsing by Callback Method
+~~~~~~~~~
+You may choose to use the callback method to parse.
+To do so, define a unique index for the argument
+and set the ``val_save`` field to NULL with a zero value-type.
-In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this way.
+In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" all use this method.
-Multiple times argument
+Multiple Times Argument
~~~~~~~~~~~~~~~~~~~~~~~
-If want to support the ability to enter the same argument multiple times,
-then should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
+If you want to support the ability to enter the same argument multiple times,
+then you should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
For example:
.. code-block:: C
@@ -182,5 +180,5 @@ Then the user input could contain multiple "--xyz" arguments.
.. note::
- The multiple times argument only support with optional argument
+ The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 3/9] doc: reword argparse section in prog guide
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
@ 2024-05-13 19:01 ` Stephen Hemminger
0 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2024-05-13 19:01 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Mon, 13 May 2024 08:59:05 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> @@ -132,15 +131,15 @@ their values are parsing in the order defined.
> The compact mode is not supported.
> Take above "-a" and "-d" as an example, don't support "-ad" input.
>
> -Parsing by autosave way
> +Parsing the Autosave Method
> ~~~~~~~~~~~~~~~~~~~~~~~
RST is picky about the title format.
If you build the docs, this change will generate a warning:
ninja -C build doc
ninja: Entering directory `build'
[4/5] Generating doc/guides/html_guides with a custom command
/home/shemminger/DPDK/doc-rework/doc/guides/contributing/design.rst:41: WARNING: Bullet list ends without a blank line; unexpected unindent.
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:135: WARNING: Title underline too short.
Parsing the Autosave Method
~~~~~~~~~~~~~~~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:135: WARNING: Title underline too short.
Parsing the Autosave Method
~~~~~~~~~~~~~~~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:161: WARNING: Title underline too short.
Parsing by Callback Method
~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:161: WARNING: Title underline too short.
Parsing by Callback Method
~~~~~~~~~
[4/5] Running external command doc (wrapped by meson to set env)
Building docs: Doxygen_API(HTML) Doxygen_API(Manpage) HTML_Guides
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH 4/9] doc: reword service cores section in prog guide
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
` (2 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 5/9] doc: reword trace library " Nandini Persad
` (5 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Harry van Haaren
made minor syntax changes to section 8 of programmer's guide, service cores
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/service_cores.rst | 32 ++++++++++++-------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided to optionally allow applications to control how the service
cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to abstract the
difference between platforms and environments.
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two ways to have service cores in a DPDK application: by
using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
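
For illustration, a minimal sketch of mapping a service to a service core via the API (the service name "evdev_sw_sched" and lcore 2 are hypothetical, and the lcore is assumed to have been reserved for service use):

.. code-block:: c

   #include <rte_service.h>

   static int
   app_setup_service_core(void)
   {
           uint32_t service_id;
           int ret;

           /* look up the service registered by the component */
           ret = rte_service_get_by_name("evdev_sw_sched", &service_id);
           if (ret != 0)
                   return ret;

           /* turn lcore 2 into a service core and map the service to it */
           ret = rte_service_lcore_add(2);
           if (ret != 0)
                   return ret;
           ret = rte_service_map_lcore_set(service_id, 2, 1);
           if (ret != 0)
                   return ret;

           /* allow the service to run, then start the service core */
           ret = rte_service_runstate_set(service_id, 1);
           if (ret != 0)
                   return ret;
           return rte_service_lcore_start(2);
   }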
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH 5/9] doc: reword trace library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
` (3 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 4/9] doc: reword service cores " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 6/9] doc: reword log " Nandini Persad
` (4 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob, Sunil Kumar Kori
made minor syntax edits to sect 9.1-9.7 of prog guide
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
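The registration code itself is elided by the diff context above. For reference, a minimal sketch of the usual pattern, assuming the ``app_trace_string`` tracepoint from the earlier example and a hypothetical ``my_tracepoint.h`` header and ``app.trace.string`` name:

.. code-block:: c

    /* my_tracepoint.c: register the tracepoint defined in the header */
    #include <rte_trace_point_register.h>

    #include "my_tracepoint.h"

    /* binds the app_trace_string placeholder to the name "app.trace.string" */
    RTE_TRACE_POINT_REGISTER(app_trace_string, app.trace.string)
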
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up or using the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
--
2.34.1
* [PATCH 6/9] doc: reword log library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (4 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 5/9] doc: reword trace library " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 7/9] doc: reword cmdline " Nandini Persad
` (3 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob, Sunil Kumar Kori
Minor changes made for syntax in the log library section and section 7.1
of the programmer's guide. A couple of sentences at the end of the
trace library section were also edited.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 24 +++++++++++-----------
doc/guides/prog_guide/log_lib.rst | 32 ++++++++++++++---------------
doc/guides/prog_guide/trace_lib.rst | 22 ++++++++++----------
3 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
====================
Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
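For illustration, minimal callback bodies for the two commands might look as follows. This is a hedged sketch: the result structure shown is a stand-in for the one emitted by the generator (its field names follow the worked example), and the ``printf`` body is purely hypothetical.

.. code-block:: c

    #include <stdio.h>
    #include <stdint.h>
    #include <cmdline.h>
    #include <cmdline_parse_string.h>

    /* stand-in for the generated result struct of "show port stats <n>" */
    struct cmd_show_port_stats_result {
        cmdline_fixed_string_t show;
        cmdline_fixed_string_t port;
        cmdline_fixed_string_t stats;
        uint16_t n;
    };

    void
    cmd_quit_parsed(void *parsed_result, struct cmdline *cl, void *data)
    {
        cmdline_quit(cl);   /* terminate the interactive command line */
    }

    void
    cmd_show_port_stats_parsed(void *parsed_result, struct cmdline *cl, void *data)
    {
        struct cmd_show_port_stats_result *res = parsed_result;

        printf("stats requested for port %u\n", res->n);
    }
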
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -5,7 +5,7 @@ Log Library
===========
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
-By default, in a Linux application, logs are sent to syslog and also to the console.
+By default, in a Linux application, logs are sent to syslog and the console.
On FreeBSD and Windows applications, logs are sent only to the console.
However, the log function can be overridden by the user to use a different logging mechanism.
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, you can achieve the same result using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
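As a small sketch of the programmatic equivalent of the ``--log-level=lib.*:warning`` example above (the wrapper function name is just for illustration):

.. code-block:: c

    #include <rte_log.h>

    /* same effect as passing --log-level=lib.*:warning on the command line */
    static void
    set_lib_log_level(void)
    {
        rte_log_set_level_pattern("lib.*", RTE_LOG_WARNING);
    }
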
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
-As well as the log message, ``rte_log()`` takes two additional parameters:
+To output log messages, the ``rte_log()`` API function should be used.
+As well as the log message, ``rte_log()`` takes two additional parameters:
* The log level
* The log component type
@@ -73,16 +73,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
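The referenced code is not visible in the diff context. As a rough sketch of the pattern (not the verbatim ``rte_cfgfile.c`` source, and with an assumed macro name and message prefix), the registration and shortcut macro look something like:

.. code-block:: c

    #include <rte_log.h>

    /* register a logtype for this component with a default level of INFO */
    RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO);

    /* shortcut macro hiding the logtype and numeric level from callers */
    #define CFG_LOG(level, fmt, args...)                    \
        rte_log(RTE_LOG_ ## level, cfgfile_logtype,         \
                "CFGFILE: %s(): " fmt "\n", __func__, ## args)
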
@@ -97,10 +97,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e2983017d8..4177f8ba15 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. Here's an example of how you grep the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. Below is an example of counting the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -210,14 +210,14 @@ Use the tracecompass GUI tool
``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
a graphical view of events. Like ``babeltrace``, tracecompass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``tracecompass``, the following are the minimum required steps:
- Install ``tracecompass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
- Launch ``tracecompass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
For more details, refer
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
@@ -225,7 +225,7 @@ For more details, refer
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,8 +238,8 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
+The DPDK trace library is designed to generate traces that use the ``Common Trace
+Format (CTF)``. The ``CTF`` specification consists of the following units to create
a trace.
- ``Stream`` Sequence of packets.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -277,7 +277,7 @@ per thread to enable lock less trace-emit function.
For non lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
+For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known of DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits. It consists of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.34.1
* [PATCH 7/9] doc: reword cmdline section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (5 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 6/9] doc: reword log " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 8/9] doc: reword stack library " Nandini Persad
` (2 subsequent siblings)
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
minor syntax edits
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 34 +++++++++++++++----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
* One command per line
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
* ``<STRING>message``
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than the type name.
+ have the comma-separated option list placed in braces rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
* A command-line context array definition, suitable for passing to ``cmdline_new``
-If so desired, the script can also output function stubs for the callback functions for each command.
+If needed, the script can also output function stubs for the callback functions for each command.
This behaviour is triggered by passing the ``--stubs`` flag to the script.
In this case, an output file must be provided with a filename ending in ".h",
and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent ".c" file.
.. note::
The stubs are written to a separate file,
- to allow continuous use of the script to regenerate the command-line header,
+ to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing application.
@@ -154,7 +154,7 @@ the callback functions would be:
These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as well.
To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
``ctx`` by default (modifiable via script parameter).
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same file or a separate one, as long as they are available to the linker at build-time.
Limitations of the Script Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
As before, we choose names to match the tokens in the command.
Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending on the desired result.
+Any of the standard-sized integer types can be used as parameters depending on the desired result.
Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
as called out in the feature list above.
* For variable string parameters,
the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
but these will be initialized differently (as described below).
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
Providing Field Initializers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");
+
The convention for naming used here is to include the base name of the overall result structure -
``cmd_quit`` in this case,
as well as the name of the field within that structure - ``quit`` in this case, followed by ``_tok``.
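Following the same convention, the numeric ``n`` field of the port stats command would use the numeric token initializer. A hedged sketch, assuming a 16-bit field as in the worked example and an illustrative token name:

.. code-block:: c

    static cmdline_parse_token_num_t cmd_show_port_stats_n_tok =
        TOKEN_NUM_INITIALIZER(struct cmd_show_port_stats_result, n, RTE_UINT16);
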
@@ -311,8 +311,8 @@ The callback function should have type:
where the first parameter is a pointer to the result structure defined above,
the second parameter is the command-line instance,
and the final parameter is a user-defined pointer provided when we associate the callback with the command.
-Most callback functions only use the first parameter, or none at all,
-but the additional two parameters provide some extra flexibility,
+Most callback functions only use the first parameter or none at all,
+but the additional two parameters provide some extra flexibility
to allow the callback to work with non-global state in your application.
For our two example commands, the relevant callback functions would look very similar in definition.
@@ -341,7 +341,7 @@ Associating Callback and Command
The ``cmdline_parse_inst_t`` type defines a "parse instance",
i.e. a sequence of tokens to be matched and then an associated function to be called.
-Also included in the instance type are a field for help text for the command,
+Also included in the instance type are a field for help text for the command
and any additional user-defined parameter to be passed to the callback functions referenced above.
For example, for our simple "quit" command:
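The instance definition that follows in the guide is elided here. A minimal sketch for the "quit" command, reusing the token and callback names from the earlier examples (the help string is an assumption):

.. code-block:: c

    static cmdline_parse_inst_t cmd_quit = {
        .f = cmd_quit_parsed,                  /* callback invoked on match */
        .data = NULL,                          /* no user data for quit */
        .help_str = "quit : exit the application",
        .tokens = {                            /* NULL-terminated token list */
            (void *)&cmd_quit_quit_tok,
            NULL,
        },
    };
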
@@ -362,8 +362,8 @@ then set the user-defined parameter to NULL,
provide a help message to be given, on request, to the user explaining the command,
before finally listing out the single token to be matched for this command instance.
-For our second, port stats, example,
-as well as making things a little more complicated by having multiple tokens to be matched,
+For our second "port stats" example,
+as well as making things more complex by having multiple tokens to be matched,
we can also demonstrate passing in a parameter to the function.
Let us suppose that our application does not always use all the ports available to it,
but instead only uses a subset of the ports, stored in an array called ``active_ports``.
--
2.34.1
* [PATCH 8/9] doc: reword stack library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (6 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 7/9] doc: reword cmdline " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 9/9] doc: reword rcu " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
minor change made to wording
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/stack_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
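For context, an application selects this lock-free implementation through the stack creation flag. A hedged sketch using the public stack API (stack name and sizes are arbitrary):

.. code-block:: c

    #include <rte_stack.h>
    #include <rte_lcore.h>

    /* create a lock-free stack and exercise the pointer-array push/pop API */
    static void
    lf_stack_example(void)
    {
        void *objs[8] = { NULL };
        struct rte_stack *s;
        unsigned int n;

        s = rte_stack_create("example_lf", 1024, rte_socket_id(),
                             RTE_STACK_F_LF);
        if (s == NULL)
            return;

        n = rte_stack_push(s, objs, 8);
        n = rte_stack_pop(s, objs, n);
        (void)n;

        rte_stack_free(s);
    }
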
The lock-free push operation enqueues a linked list of pointers by pointing the
--
2.34.1
* [PATCH 9/9] doc: reword rcu library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (7 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 8/9] doc: reword stack library " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
9 siblings, 0 replies; 30+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Honnappa Nagarahalli
Simple syntax changes made to the RCU library section in the programmer's guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/rcu_lib.rst | 77 ++++++++++++++++---------------
1 file changed, 40 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes, as a parameter, the maximum number of
+reader threads that will use this variable.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
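A hedged sketch of this allocation and initialization step, sized here for all possible lcores (the helper name is illustrative):

.. code-block:: c

    #include <rte_rcu_qsbr.h>
    #include <rte_malloc.h>
    #include <rte_lcore.h>

    /* allocate and initialize a QS variable for up to RTE_MAX_LCORE readers */
    static struct rte_rcu_qsbr *
    create_qs_variable(void)
    {
        uint32_t max_threads = RTE_MAX_LCORE;
        size_t sz = rte_rcu_qsbr_get_memsize(max_threads);
        struct rte_rcu_qsbr *qs;

        qs = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
        if (qs != NULL)
            rte_rcu_qsbr_init(qs, max_threads);
        return qs;
    }
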
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
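Put together, a reader thread's use of these APIs might be sketched as follows. ``qs``, ``thread_id`` and the loop control are assumed to come from the application, and the read-side work is elided:

.. code-block:: c

    #include <rte_rcu_qsbr.h>

    static void
    reader_loop(struct rte_rcu_qsbr *qs, unsigned int thread_id,
                volatile int *running)
    {
        rte_rcu_qsbr_thread_register(qs, thread_id);
        rte_rcu_qsbr_thread_online(qs, thread_id);

        while (*running) {
            /* ... read-side access to the lock-free data structure ... */

            /* report a quiescent state once no references are held */
            rte_rcu_qsbr_quiescent(qs, thread_id);
        }

        /* stop reporting before blocking or exiting */
        rte_rcu_qsbr_thread_offline(qs, thread_id);
        rte_rcu_qsbr_thread_unregister(qs, thread_id);
    }
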
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
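On the writer side, the token-based flow can be sketched as below; the freed element and the choice of the blocking variant of ``rte_rcu_qsbr_check()`` are illustrative assumptions:

.. code-block:: c

    #include <stdbool.h>
    #include <stdlib.h>
    #include <rte_rcu_qsbr.h>

    /* called after the element has been deleted from the data structure */
    static void
    writer_wait_and_free(struct rte_rcu_qsbr *qs, void *removed_elem)
    {
        /* start quiescent state detection and obtain a token */
        uint64_t token = rte_rcu_qsbr_start(qs);

        /* block until all registered readers pass a quiescent state */
        rte_rcu_qsbr_check(qs, token, true);

        /* the memory can now be returned to the allocator */
        free(removed_elem);
    }
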
@@ -173,7 +173,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -181,9 +181,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -203,40 +203,43 @@ the application. When a writer deletes an entry from a data structure, the write
There are several APIs provided to help with this process. The writer
can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call the ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms.
+
+The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
#. Initialization
#. Quiescent State Reporting
#. Reclaiming Resources
#. Shutdown
The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such at rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as L3 Forwarding example application, OVS, VPP etc..
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
+The application must handle 'Initialization' and 'Quiescent State Reporting'. Therefore, the application:
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
+* Must create the RCU variable and register the reader threads to report their quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent state. This allows the application to control the length of the critical section and how frequently it wants to report the quiescent state.
-The client library will handle 'Reclaiming Resources' part of the process. The
+The client library will handle the 'Reclaiming Resources' part of the process. The
client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
+reclamation algorithm. So, the client library should:
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
+* Provide an API to register an RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
The 'Shutdown' process needs to be shared between the application and the
-client library.
+client library. Note that:
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
+* The application should make sure that the reader threads are not using the shared data structure and unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+* The client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
--
2.34.1
* [PATCH v2 1/9] doc: reword pmd section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (8 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 9/9] doc: reword rcu " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
` (8 more replies)
9 siblings, 9 replies; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I made edits for syntax/grammar in the PMD section of the prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/poll_mode_drv.rst | 151 ++++++++++++------------
1 file changed, 73 insertions(+), 78 deletions(-)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and para virtualized virtio Poll Mode Drivers.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver running in user space) to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and a generic external API for the Ethernet PMDs.
Requirements and Assumptions
----------------------------
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ Packets are then processed on the same core and placed on a port's Tx descriptor ring through an API for transmission.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
+* In the *pipe-line* model, one core polls one or more port's Rx descriptor ring through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
* Retrieve the received packet from the packet queue
-* Process the received packet, up to its retransmission if forwarded
+* Process the received packet up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per core private resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively implemented by the PMD to m
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are extensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
Logical Cores, Memory and NIC Queues Relationships
@@ -111,7 +110,7 @@ Logical Cores, Memory and NIC Queues Relationships
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
+The buffers should, if possible, remain on the local processor to obtain the best performance results and Rx and Tx buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
@@ -120,12 +119,11 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+concurrently on the same Tx queue without an SW lock. This PMD feature, found in some NICs, is useful for:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Removing explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* Enabling greater scalability by removing the requirement to have a dedicated Tx core for transmitting.
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
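A hedged sketch of the capability probing mentioned here (the helper name is illustrative):

.. code-block:: c

    #include <rte_ethdev.h>

    /* returns non-zero if rte_eth_tx_burst() may be called on the same
     * Tx queue from multiple lcores without a software lock
     */
    static int
    port_has_mt_lockfree_tx(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return 0;

        return (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE) != 0;
    }
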
@@ -135,8 +133,8 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
+Each NIC port is uniquely designated by its PCI
+identifiers (bus/bridge, device, function) assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
@@ -149,14 +147,13 @@ Port Ownership
The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+This prevents Ethernet ports from being managed by different entities.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+ It is the DPDK entity's responsibility to set the port owner before using the port and to manage the port usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+It is recommended to set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
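As an illustration of the ownership APIs referred to here (the owner name and helper are arbitrary assumptions):

.. code-block:: c

    #include <stdio.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* claim a port for this entity before configuring it */
    static int
    claim_port(uint16_t port_id)
    {
        struct rte_eth_dev_owner owner;

        memset(&owner, 0, sizeof(owner));
        if (rte_eth_dev_owner_new(&owner.id) != 0)
            return -1;
        snprintf(owner.name, sizeof(owner.name), "my-app");

        return rte_eth_dev_owner_set(port_id, &owner);
    }
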
Device Configuration
~~~~~~~~~~~~~~~~~~~~
@@ -165,7 +162,7 @@ The configuration of each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
@@ -174,7 +171,7 @@ The configuration of each NIC port includes the following operations:
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -210,7 +207,7 @@ Each transmit queue is independently configured with the following information:
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ A value of 0 can be passed during the Tx queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
@@ -222,7 +219,7 @@ Each transmit queue is independently configured with the following information:
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
+ This saves upstream PCIe* bandwidth resulting from Tx descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
@@ -244,7 +241,7 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
@@ -265,7 +262,7 @@ There are two scenarios when an application may want the mbuf released immediate
One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
+ Then, the same packet can be transmitted to the next destination interface.
The application is still responsible for managing any packet manipulations needed
between the different destination interfaces, but a packet copy can be avoided.
This API is independent of whether the packet was transmitted or dropped,
@@ -288,13 +285,13 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+lockfree multithreaded Tx burst on the same Tx queue.
The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
with their appropriate handling by the receive/transmit functions
exported by each PMD. The list of flags and their precise meaning is
-described in the mbuf API documentation and in the in :ref:`Mbuf Library
+described in the mbuf API documentation and in the :ref:`Mbuf Library
<Mbuf_Library>`, section "Meta Information".
Per-Port and Per-Queue Offloads
@@ -303,14 +300,14 @@ Per-Port and Per-Queue Offloads
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
+* A pure per-port offload is one supported by the device but not as a per-queue type.
+* A pure per-port offloading cannot be enabled on a queue and disabled on another queue at the same time.
* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* Any offloading is either per-queue or pure per-port type, but cannot be both types on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offloading can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
@@ -329,8 +326,8 @@ per-port type and no matter whether it is set or cleared in
If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+is the one that hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise an error log will be triggered.
Poll Mode Driver API
--------------------
@@ -340,8 +337,8 @@ Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
@@ -357,7 +354,7 @@ The rte_mbuf data structure includes specific fields to represent, in a generic
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
-The mbuf structure is fully described in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
+The mbuf structure is described in depth in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
@@ -370,12 +367,12 @@ Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
+parameters applicable to all Ethernet devices. These arguments/parameters are available for
+specifying particular devices and for passing common configuration
parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device which supports the creation of representor ports.
+ This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -392,7 +389,7 @@ parameters to those ports.
-a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]
(Multiple representors in one device argument can be represented as a list)
-Note: PMDs are not required to support the standard device arguments and users
+Note: PMDs are not required to support the standard device arguments. Users
should consult the relevant PMD documentation to see support devargs.
Extended Statistics API
@@ -402,9 +399,9 @@ The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
@@ -439,7 +436,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -466,8 +463,8 @@ lookup of specific statistics. Performant lookup means two things;
The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
@@ -486,7 +483,7 @@ statistics.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ returns all available statistics.
Application Usage
@@ -496,10 +493,10 @@ Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
+decide what next steps to perform. The following code snippets show how the
xstats API can be used to achieve this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -545,7 +542,7 @@ First step is to get all statistics names and list them:
The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -564,8 +561,7 @@ ids of those statistics by looking up the name as follows:
The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+function call overhead of retrieving statistics and simplifies the application's lookup of multiple statistics.
.. code-block:: c
@@ -585,8 +581,8 @@ statistics simpler for the application.
This array lookup API for xstats allows the application create multiple
"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+call. As an end result, the application can achieve its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
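
As a minimal sketch of the two-step usage described above (assuming the statistic's index is used as its ``id``, as in the guide's example; the helper names and error handling are placeholders):

.. code-block:: c

   #include <stdlib.h>
   #include <string.h>
   #include <rte_ethdev.h>

   /* Slow path: scan the names once and cache the id of one statistic. */
   static int
   app_find_xstat_id(uint16_t port_id, const char *name, uint64_t *id)
   {
           int i, len = rte_eth_xstats_get_names(port_id, NULL, 0);
           struct rte_eth_xstat_name *names;

           if (len <= 0)
                   return -1;
           names = malloc(sizeof(*names) * len);
           if (names == NULL)
                   return -1;
           len = rte_eth_xstats_get_names(port_id, names, len);
           for (i = 0; i < len; i++) {
                   if (strcmp(names[i].name, name) == 0) {
                           *id = i;
                           free(names);
                           return 0;
                   }
           }
           free(names);
           return -1;
   }

   /* Fast path: read only the cached statistic. */
   static uint64_t
   app_read_xstat(uint16_t port_id, uint64_t id)
   {
           uint64_t value = 0;

           rte_eth_xstats_get_by_id(port_id, &id, &value, 1);
           return value;
   }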
@@ -597,23 +593,23 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
+There are times when a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
+consistent with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
+The PMD's duty is to trigger RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications can proceed as follows: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety all these operations should be called from the same thread.
For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+this event and also triggers an interrupt to VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
@@ -621,13 +617,12 @@ call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
+The rte_eth_dev_reset() is a generic function that only performs hardware reset operations by calling dev_unint() and
+dev_init(). It does not handle synchronization, which is handled
by application.
The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
+the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
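
A rough sketch of the application side of this passive flow, assuming ``app_stop_datapath()`` and ``app_restart_datapath()`` are hypothetical helpers standing in for application-specific queue handling:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_ethdev.h>

   static int
   app_eth_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                    void *cb_arg, void *ret_param)
   {
           RTE_SET_USED(cb_arg);
           RTE_SET_USED(ret_param);

           if (event != RTE_ETH_EVENT_INTR_RESET)
                   return 0;

           /* Stop the threads polling this port before resetting it;
            * a real application would typically defer this work to its
            * own thread to respect the thread-safety note above. */
           app_stop_datapath(port_id);            /* hypothetical */
           if (rte_eth_dev_reset(port_id) == 0)
                   app_restart_datapath(port_id); /* hypothetical */
           return 0;
   }

   /* Registration, done once after the port is configured. */
   static void
   app_register_reset_handler(uint16_t port_id)
   {
           rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
                                         app_eth_event_cb, NULL);
   }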
@@ -635,15 +630,15 @@ The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``. It differs
+from PASSIVE mode, where the application invokes the recovery.
+In PROACTIVE mode, the PMD automatically recovers from the error,
and only a small amount of work is required for the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+and ensures the control path operations fail with a return code ``-EBUSY``.
Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
@@ -655,9 +650,9 @@ three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
- and the recovery is being started.
+ and the recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
@@ -667,7 +662,7 @@ three events are available:
because a larger error may occur during the recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
+ Notify the application that the recovery from the error was successful,
the PMD already re-configures the port,
and the effect is the same as a restart operation.
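
A sketch of how the three recovery events could be consumed, reusing the callback registration pattern from the earlier sketch; the ``app_*`` helpers are hypothetical:

.. code-block:: c

   static int
   app_recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                         void *cb_arg, void *ret_param)
   {
           RTE_SET_USED(cb_arg);
           RTE_SET_USED(ret_param);

           switch (event) {
           case RTE_ETH_EVENT_ERR_RECOVERING:
                   /* Control path calls fail with -EBUSY until recovery ends. */
                   app_pause_control_path(port_id);  /* hypothetical */
                   break;
           case RTE_ETH_EVENT_RECOVERY_SUCCESS:
                   /* The PMD has re-configured the port; resume normal operation. */
                   app_resume_control_path(port_id); /* hypothetical */
                   break;
           case RTE_ETH_EVENT_RECOVERY_FAILED:
                   /* The port is unusable; the application decides how to react. */
                   app_handle_port_failure(port_id); /* hypothetical */
                   break;
           default:
                   break;
           }
           return 0;
   }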
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 2/9] doc: reword argparse section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
` (7 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I have made small edits for syntax in this section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/argparse_lib.rst | 75 +++++++++++++-------------
1 file changed, 38 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..1acde60861 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,30 +4,31 @@
Argparse Library
================
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy to write user-friendly command-line programs.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Supports parsing of optional arguments (which can contain no-value,
+ required-value and optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must contain required-values).
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Issues errors when an argument is invalid.
+
+- Supports parsing arguments in two ways:
-- Support parsing argument by two ways:
#. autosave: used for parsing known value types;
#. callback: will invoke user callback to parse.
+
Usage Guide
-----------
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
.. code-block:: C
@@ -89,12 +90,12 @@ The following code demonstrates how to use:
...
}
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
-Every argument must be set whether to carry a value (one of
+Every argument must set whether it carries a value (one of
``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
@@ -105,26 +106,26 @@ Every argument must be set whether to carry a value (one of
User Input Requirements
~~~~~~~~~~~~~~~~~~~~~~~
-For optional arguments which take no-value,
+For optional arguments which have no-value,
the following mode is supported (take above "--aaa" as an example):
- The single mode: "--aaa" or "-a".
-For optional arguments which take required-value,
+For optional arguments which have required-value,
the following two modes are supported (take above "--bbb" as an example):
- The kv mode: "--bbb=1234" or "-b=1234".
- The split mode: "--bbb 1234" or "-b 1234".
-For optional arguments which take optional-value,
+For optional arguments which have optional-value,
the following two modes are supported (take above "--ccc" as an example):
- The single mode: "--ccc" or "-c".
- The kv mode: "--ccc=123" or "-c=123".
-For positional arguments which must take required-value,
+For positional arguments which must have required-value,
their values are parsing in the order defined.
.. note::
@@ -132,15 +133,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
-Parsing by autosave way
-~~~~~~~~~~~~~~~~~~~~~~~
+Parsing by the Autosave Method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method.
+The result will be saved in the ``val_saver`` field.
In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+both use this method. The parsing is as follows:
- For argument "--aaa", it is configured as no-value,
so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +151,28 @@ both use this way, the parsing is as follows:
so the ``bbb_val`` will be set to user input's value
(e.g. will be set to 1234 with input "--bbb 1234").
-- For argument "--ccc", it is configured as optional-value,
- if user only input "--ccc" then the ``ccc_val`` will be set to ``val_set`` field
+- For argument "--ccc", it is configured as optional-value.
+ If the user only inputs "--ccc", then the ``ccc_val`` will be set to the ``val_set`` field
which is 200 in the above example;
- if user input "--ccc=123", then the ``ccc_val`` will be set to 123.
+ If the user inputs "--ccc=123", then the ``ccc_val`` will be set to 123.
- For argument "ooo", it is positional argument,
the ``ooo_val`` will be set to user input's value.
-Parsing by callback way
-~~~~~~~~~~~~~~~~~~~~~~~
+Parsing by Callback Method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
-It could also choose to use callback to parse,
-just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+You may choose to use the callback method to parse.
+To do so, define a unique index for the argument
+and set the ``val_save`` field to NULL with a zero value type.
-In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this way.
+In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this method.
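
A sketch of what a callback-method parser might look like; the callback prototype (the argument's index, its raw string value and the user's opaque pointer) and the index values are assumptions here, so consult ``rte_argparse.h`` for the exact types:

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdlib.h>

   /* Hypothetical indexes chosen when defining "--ddd" and "--eee". */
   #define ARG_DDD_IDX 1
   #define ARG_EEE_IDX 2

   struct app_config {        /* hypothetical application state */
           bool ddd_set;
           long eee_val;
   };

   /* Assumed callback shape: receives the argument's unique index, its raw
    * string value (NULL for no-value arguments) and the opaque pointer. */
   static int
   app_args_callback(uint32_t index, const char *value, void *opaque)
   {
           struct app_config *cfg = opaque;

           switch (index) {
           case ARG_DDD_IDX:
                   cfg->ddd_set = true;
                   break;
           case ARG_EEE_IDX:
                   cfg->eee_val = strtol(value, NULL, 0);
                   break;
           default:
                   return -1;
           }
           return 0;
   }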
-Multiple times argument
+Multiple Times Argument
~~~~~~~~~~~~~~~~~~~~~~~
-If want to support the ability to enter the same argument multiple times,
-then should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
+If you want to support the ability to enter the same argument multiple times,
+then you should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
For example:
.. code-block:: C
@@ -182,5 +183,5 @@ Then the user input could contain multiple "--xyz" arguments.
.. note::
- The multiple times argument only support with optional argument
+ The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 3/9] doc: reword design section in contributors guidelines
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanus Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
` (6 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor editing was made for grammar and syntax of the design section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
.mailmap | 1 +
doc/guides/contributing/design.rst | 86 +++++++++++++++---------------
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
3 files changed, 45 insertions(+), 44 deletions(-)
diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu <naga.sureshx.somarowthu@intel.com>
Nalla Pradeep <pnalla@marvell.com>
Na Na <nana.nn@alibaba-inc.com>
Nan Chen <whutchennan@gmail.com>
+Nandini Persad <nandinipersad361@gmail.com>
Nannan Lu <nannan.lu@intel.com>
Nan Zhou <zhounan14@huawei.com>
Narcisa Vasile <navasile@linux.microsoft.com> <navasile@microsoft.com> <narcisa.vasile@microsoft.com>
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index b724177ba1..3d1f5aeb91 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -1,6 +1,7 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
+
Design
======
@@ -8,22 +9,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBsd or Linux, etc.).
+When feasible, such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+
+By convention, a file is specific if the directory is indicated. Otherwise, it is common.
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+For example:
+
+A file located in a subdir of "x86_64" directory is specific to this architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture or executive environment-specific code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
-When absolutely necessary, there are several ways to handle specific code:
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
+* When the differences are small and they can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
+
.. code-block:: c
@@ -33,9 +38,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+
+* When the differences are more significant, use build definition macros and conditions in the Meson build file. In this case, the code is split into two separate files that are architecture or environment specific. This should only apply inside the EAL library.
+
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,7 +48,8 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when building for these architectures.
+
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,30 +57,22 @@ Per Execution Environment Sources
The following macro options can be used:
* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only when building for this execution environment.
+
Mbuf features
-------------
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynamic" mbuf area is the default choice for the new features.
+A designated area in mbuf stores "dynamically" registered fields and flags. It is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf, indicating that it's being efficiently utilized. However, the ``rte_mbuf`` structure must be kept small (128 bytes).
-The "dynamic" area is eating the remaining space in mbuf,
-and some existing "static" fields may need to become "dynamic".
-
-Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+As more features are added, the space for existinG=g "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception. It must meet specific criteria including widespread usage, performance impact, and size considerations. Before adding a new static feature, it must be justified by its necessity and its impact on the system's efficiency.
Runtime Information - Logging, Tracing and Telemetry
----------------------------------------------------
-It is often desirable to provide information to the end-user
-as to what is happening to the application at runtime.
-DPDK provides a number of built-in mechanisms to provide this introspection:
+The end user may inquire as to what is happening to the application at runtime.
+DPDK provides several built-in mechanisms to provide these insights:
* :ref:`Logging <dynamic_logging>`
* :doc:`Tracing <../prog_guide/trace_lib>`
@@ -82,11 +80,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+ For example, depending on the severity of the issue, the appropriate log level,
+ ``ERROR``, ``WARNING`` or ``NOTICE`` should be used.
.. note::
@@ -96,24 +94,24 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration, device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
+
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
writing verbose internal details to a given file-handle.
New libraries are encouraged to provide such functions where it makes sense to do so,
@@ -135,13 +133,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
Prevention of ABI changes due to library statistics support
@@ -165,8 +162,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -198,6 +195,7 @@ applications:
the application may decide to turn the collection of statistics counters off for
Library X and on for Library Y.
+
The statistics collection consumes a certain amount of CPU resources (cycles,
cache bandwidth, memory bandwidth, etc) that depends on:
@@ -218,6 +216,7 @@ cache bandwidth, memory bandwidth, etc) that depends on:
validated for header integrity, counting the number of bits set in a bitmask
might be needed.
+
PF and VF Considerations
------------------------
@@ -229,5 +228,6 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
+
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 13be715933..0569c5cae6 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -99,7 +99,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH] doc/design: minor cleanus
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-06-22 14:47 ` Stephen Hemminger
2024-06-24 15:07 ` Thomas Monjalon
0 siblings, 1 reply; 30+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:47 UTC (permalink / raw)
To: nandinipersad361; +Cc: dev, Stephen Hemminger
Minor fixes to previous edit:
1. remove blank line at end of file, causes git complaint
2. fix minor typo (UTF-8?)
3. break long lines, although rst doesn't care it is nicer
for future editors to keep to 100 characters or less.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
Depends-on: patch-141466 ("doc: reword design section in contributors guideline")
doc/guides/contributing/design.rst | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 3d1f5aeb91..77c4d3d823 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -63,9 +63,16 @@ The following macro options can be used:
Mbuf features
-------------
-A designated area in mbuf stores "dynamically" registered fields and flags. It is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf, indicating that it's being efficiently utilized. However, the ``rte_mbuf`` structure must be kept small (128 bytes).
+A designated area in mbuf stores "dynamically" registered fields and flags. It is the default choice
+for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf,
+indicating that it's being efficiently utilized. However, the ``rte_mbuf`` structure must be kept
+small (128 bytes).
-As more features are added, the space for existinG=g "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception. It must meet specific criteria including widespread usage, performance impact, and size considerations. Before adding a new static feature, it must be justified by its necessity and its impact on the system's efficiency.
+As more features are added, the space for existing "static" fields (fields that are allocated
+statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new
+static field or flag should be an exception. It must meet specific criteria including widespread
+usage, performance impact, and size considerations. Before adding a new static feature, it must be
+justified by its necessity and its impact on the system's efficiency.
Runtime Information - Logging, Tracing and Telemetry
@@ -134,7 +141,8 @@ Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Having runtime support for enabling/disabling library statistics is recommended
-as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+as build-time options should be avoided. However, if build-time options are used,
+as in the table library, the options can be set using c_args.
When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
@@ -230,4 +238,3 @@ testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
-
--
2.43.0
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH] doc/design: minor cleanus
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanus Stephen Hemminger
@ 2024-06-24 15:07 ` Thomas Monjalon
0 siblings, 0 replies; 30+ messages in thread
From: Thomas Monjalon @ 2024-06-24 15:07 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: nandinipersad361, dev
22/06/2024 16:47, Stephen Hemminger:
> Minor fixes to previous edit:
> 1. remove blank line at end of file, causes git complaint
> 2. fix minor typo (UTF-8?)
> 3. break long lines, although rst doesn't care it is nicer
> for future editors to keep to 100 characters or less.
While changing lines, please split logically after dots, commas, etc,
so each line talks about something different.
It is easier to read/review, and it makes future changes even easier to review.
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 4/9] doc: reword service cores section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
` (5 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I've made minor syntax changes to section 8 of the programmer's guide, service cores.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/service_cores.rst | 32 ++++++++++++-------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided that optionally allows applications to control how the service
cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to simplify the
difference between platforms and environments.
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in the software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two methods for having service cores in a DPDK application: by
using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
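
A small sketch of mapping a named service onto a service core at runtime; the service name and lcore id passed in are arbitrary examples, not fixed values:

.. code-block:: c

   #include <rte_service.h>

   static int
   app_enable_service_on_core(const char *service_name, uint32_t lcore_id)
   {
           uint32_t service_id;

           if (rte_service_get_by_name(service_name, &service_id) != 0)
                   return -1;

           /* Make the lcore a service core (it may already be one if it
            * was passed via the -s coremask). */
           rte_service_lcore_add(lcore_id);

           /* Map the service to the core and allow the service to run. */
           rte_service_map_lcore_set(service_id, lcore_id, 1);
           rte_service_runstate_set(service_id, 1);

           return rte_service_lcore_start(lcore_id);
   }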
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 5/9] doc: reword trace library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (2 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:54 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
` (4 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor syntax edits were made to the trace library section of the prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up, or by using the ``rte_trace_mode_set()`` API to
configure at runtime.
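
A minimal sketch of switching the mode and saving the buffers at runtime; error handling is omitted:

.. code-block:: c

   #include <rte_trace.h>

   static void
   app_snapshot_traces(void)
   {
           /* Stop overwriting old events, then write out the trace directory. */
           rte_trace_mode_set(RTE_TRACE_MODE_DISCARD);
           rte_trace_save();
   }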
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v2 5/9] doc: reword trace library section in prog guide
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
@ 2024-06-22 14:54 ` Stephen Hemminger
0 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:54 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:50 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor syntax edits were made to sect the trace library section of prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
> doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
> 1 file changed, 25 insertions(+), 25 deletions(-)
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 6/9] doc: reword log library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (3 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
` (3 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor changes were made for syntax in the log library section and 7.1
section of the programmer's guide. A couple sentences at the end of the
trace library section were also edited.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 24 +++++++++++-----------
doc/guides/prog_guide/log_lib.rst | 32 ++++++++++++++---------------
doc/guides/prog_guide/trace_lib.rst | 22 ++++++++++----------
3 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
====================
Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
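
A sketch of what the two callbacks named above could look like; the generated header name and the ``n`` field of the result struct are assumptions taken from the ``show port stats <n>`` example:

.. code-block:: c

   #include <stdio.h>
   #include <inttypes.h>
   #include <cmdline.h>
   #include <rte_ethdev.h>
   #include "commands.h"   /* hypothetical name of the generated header */

   void
   cmd_quit_parsed(void *parsed_result, struct cmdline *cl, void *data)
   {
           (void)parsed_result;
           (void)data;
           cmdline_quit(cl);
   }

   void
   cmd_show_port_stats_parsed(void *parsed_result, struct cmdline *cl, void *data)
   {
           struct cmd_show_port_stats_result *res = parsed_result;
           struct rte_eth_stats stats;

           (void)cl;
           (void)data;
           if (rte_eth_stats_get(res->n, &stats) == 0)
                   printf("port %u: rx=%" PRIu64 " tx=%" PRIu64 "\n",
                          res->n, stats.ipackets, stats.opackets);
   }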
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via a script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
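
Wiring the generated context into an interactive prompt could look roughly like this sketch; the header name and prompt string are assumptions:

.. code-block:: c

   #include <cmdline.h>
   #include <cmdline_socket.h>
   #include "commands.h"   /* hypothetical generated header providing "ctx" */

   static void
   app_run_prompt(void)
   {
           struct cmdline *cl = cmdline_stdin_new(ctx, "myapp> ");

           if (cl == NULL)
                   return;
           cmdline_interact(cl);  /* returns on EOF or when a callback calls cmdline_quit() */
           cmdline_stdin_exit(cl);
   }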
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -5,7 +5,7 @@ Log Library
===========
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
-By default, in a Linux application, logs are sent to syslog and also to the console.
+By default, in a Linux application, logs are sent to syslog and the console.
On FreeBSD and Windows applications, logs are sent only to the console.
However, the log function can be overridden by the user to use a different logging mechanism.
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
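The in-application equivalent mentioned just below would look roughly like this (a sketch; the call is meant to be made after ``rte_eal_init()``):

.. code-block:: c

   #include <rte_log.h>

   static void
   set_error_only_logging(void)
   {
           /* Emit only messages of level "error" or more important. */
           rte_log_set_global_level(RTE_LOG_ERR);
   }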
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, you can achieve the same result using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
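For example, a sketch of the equivalent call from code (after EAL initialization):

.. code-block:: c

   #include <rte_log.h>

   static void
   quiet_libs(void)
   {
           /* Same effect as --log-level=lib.*:warning on the command line. */
           rte_log_set_level_pattern("lib.*", RTE_LOG_WARNING);
   }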
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
-As well as the log message, ``rte_log()`` takes two additional parameters:
+To output log messages, the ``rte_log()`` API function should be used.
+Along with the log message itself, ``rte_log()`` takes two additional parameters:
* The log level
* The log component type
@@ -73,16 +73,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
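The actual ``rte_cfgfile.c`` snippet is part of the full document rather than this diff, but the registration pattern it follows is roughly the one sketched below (``example_logtype`` and ``EXAMPLE_LOG`` are illustrative names, not DPDK symbols):

.. code-block:: c

   #include <rte_log.h>

   /* Register a logtype with default level INFO; defines "example_logtype". */
   RTE_LOG_REGISTER_DEFAULT(example_logtype, INFO);

   /* Shortcut macro hiding the component type and using level names. */
   #define EXAMPLE_LOG(level, fmt, ...) \
           rte_log(RTE_LOG_ ## level, example_logtype, \
                   "EXAMPLE: " fmt "\n", ## __VA_ARGS__)

A call such as ``EXAMPLE_LOG(ERR, "cannot open %s", name);`` then expands to an ``rte_log()`` call carrying the component's own logtype.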
@@ -97,10 +97,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e2983017d8..4177f8ba15 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. Below is an example of grepping for ``ethdev`` events only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. Below is an example of counting the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -210,14 +210,14 @@ Use the tracecompass GUI tool
``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
a graphical view of events. Like ``babeltrace``, tracecompass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``tracecompass``, the following are the minimum required steps:
- Install ``tracecompass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
- Launch ``tracecompass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
For more details, refer
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
@@ -225,7 +225,7 @@ For more details, refer
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,8 +238,8 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
+The DPDK trace library is designed to generate traces that use the ``Common Trace
+Format (CTF)``. The ``CTF`` specification consists of the following units to create
a trace.
- ``Stream`` Sequence of packets.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -277,7 +277,7 @@ per thread to enable lock less trace-emit function.
For non lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
+For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known of DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits. It consists of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v2 6/9] doc: reword log library section in prog guide
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
@ 2024-06-22 14:55 ` Stephen Hemminger
0 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:55 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:51 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor changes made for syntax in the log library section and 7.1
> section of the programmer's guide. A couple sentences at the end of the
> trace library section were also edited.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 7/9] doc: reword cmdline section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (4 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
` (2 subsequent siblings)
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor syntax edits made to the cmdline section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 34 +++++++++++++++----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
* One command per line
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
* ``<STRING>message``
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than by the type name.
+ have the comma-separated option list placed in braces instead of the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
* A command-line context array definition, suitable for passing to ``cmdline_new``
-If so desired, the script can also output function stubs for the callback functions for each command.
+If needed, the script can also output function stubs for the callback functions for each command.
This behaviour is triggered by passing the ``--stubs`` flag to the script.
In this case, an output file must be provided with a filename ending in ".h",
and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent ".c" file.
.. note::
The stubs are written to a separate file,
- to allow continuous use of the script to regenerate the command-line header,
+ to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing application.
@@ -154,7 +154,7 @@ the callback functions would be:
These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as well.
To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
``ctx`` by default (modifiable via a script parameter).
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same or a separate file, as long as they are available to the linker at build-time.
Limitations of the Script Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
As before, we choose names to match the tokens in the command.
Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending on the desired result.
+Any of the standard-sized integer types can be used as parameters depending on the desired result.
Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
as called out in the feature list above.
* For variable string parameters,
the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
but these will be initialized differently (as described below).
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
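Putting those type choices together, a result struct for a hypothetical command mixing the parameter kinds listed above might be sketched as follows (the struct and field names are illustrative, not from the patch):

.. code-block:: c

   #include <stdint.h>
   #include <cmdline_parse_string.h>
   #include <cmdline_parse_ipaddr.h>
   #include <rte_ether.h>

   struct cmd_example_result {
           cmdline_fixed_string_t set;    /* fixed token "set" */
           cmdline_fixed_string_t mode;   /* option list, e.g. (rx,tx,rxtx) */
           uint16_t n;                    /* 16-bit numeric parameter */
           struct rte_ether_addr mac;     /* ethernet address parameter */
           cmdline_ipaddr_t dst_ip;       /* IP address parameter */
   };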
Providing Field Initializers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");
+
The convention for naming used here is to include the base name of the overall result structure -
``cmd_quit`` in this case,
as well as the name of the field within that structure - ``quit`` in this case, followed by ``_tok``.
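Following the same convention, a sketch of the numeric-token initializer for the ``<n>`` field of the ``show port stats <n>`` command could look like this (assuming the field is named ``n``):

.. code-block:: c

   #include <cmdline_parse_num.h>

   static cmdline_parse_token_num_t cmd_show_port_stats_n_tok =
           TOKEN_NUM_INITIALIZER(struct cmd_show_port_stats_result, n, RTE_UINT16);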
@@ -311,8 +311,8 @@ The callback function should have type:
where the first parameter is a pointer to the result structure defined above,
the second parameter is the command-line instance,
and the final parameter is a user-defined pointer provided when we associate the callback with the command.
-Most callback functions only use the first parameter, or none at all,
-but the additional two parameters provide some extra flexibility,
+Most callback functions only use the first parameter or none at all,
+but the additional two parameters provide some extra flexibility
to allow the callback to work with non-global state in your application.
For our two example commands, the relevant callback functions would look very similar in definition.
@@ -341,7 +341,7 @@ Associating Callback and Command
The ``cmdline_parse_inst_t`` type defines a "parse instance",
i.e. a sequence of tokens to be matched and then an associated function to be called.
-Also included in the instance type are a field for help text for the command,
+Also included in the instance type are a field for help text for the command
and any additional user-defined parameter to be passed to the callback functions referenced above.
For example, for our simple "quit" command:
@@ -362,8 +362,8 @@ then set the user-defined parameter to NULL,
provide a help message to be given, on request, to the user explaining the command,
before finally listing out the single token to be matched for this command instance.
-For our second, port stats, example,
-as well as making things a little more complicated by having multiple tokens to be matched,
+For our second "port stats" example,
+which is more complex because it has multiple tokens to be matched,
we can also demonstrate passing in a parameter to the function.
Let us suppose that our application does not always use all the ports available to it,
but instead only uses a subset of the ports, stored in an array called ``active_ports``.
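For reference, a parse instance for the simpler ``quit`` command might be sketched as below, tying together the callback, the optional user parameter, the help text and the NULL-terminated token list (the real definitions live in the full document, not in this diff):

.. code-block:: c

   static cmdline_parse_inst_t cmd_quit = {
           .f = cmd_quit_parsed,                  /* callback function */
           .data = NULL,                          /* user-defined parameter */
           .help_str = "quit : exit the application",
           .tokens = {                            /* tokens to match, NULL-terminated */
                   (void *)&cmd_quit_quit_tok,
                   NULL,
           },
   };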
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 8/9] doc: reword stack library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (5 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
2024-06-22 14:52 ` [PATCH v2 1/9] doc: reword pmd " Stephen Hemminger
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor changes made to wording of the stack library section in prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/stack_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
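As a usage sketch (not part of this patch), the lock-free variant is selected at creation time and then driven through the normal push/pop API:

.. code-block:: c

   #include <rte_stack.h>
   #include <rte_lcore.h>

   static struct rte_stack *
   make_lf_stack(void)
   {
           /* RTE_STACK_F_LF selects the lock-free implementation. */
           return rte_stack_create("lf_stack", 1024, rte_socket_id(),
                                   RTE_STACK_F_LF);
   }

   static void
   push_pop(struct rte_stack *s, void *obj)
   {
           void *objs[8];
           unsigned int n;

           rte_stack_push(s, &obj, 1);      /* safe to call from multiple threads */
           n = rte_stack_pop(s, objs, 8);   /* returns the number actually popped */
           (void)n;
   }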
The lock-free push operation enqueues a linked list of pointers by pointing the
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH v2 9/9] doc: reword rcu library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (6 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2024-06-22 14:52 ` [PATCH v2 1/9] doc: reword pmd " Stephen Hemminger
8 siblings, 1 reply; 30+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Simple syntax changes made to the rcu library section in programmer's guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/rcu_lib.rst | 77 ++++++++++++++++---------------
1 file changed, 40 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical sections for D2 are quiescent states
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes, as a parameter, the maximum number
+of reader threads that will use this variable.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
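A minimal initialization sketch, assuming EAL memory allocation and a caller-supplied maximum thread count (the allocator choice is illustrative):

.. code-block:: c

   #include <rte_rcu_qsbr.h>
   #include <rte_malloc.h>

   static struct rte_rcu_qsbr *
   create_qs_variable(uint32_t max_threads)
   {
           size_t sz = rte_rcu_qsbr_get_memsize(max_threads);
           struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);

           if (v != NULL)
                   rte_rcu_qsbr_init(v, max_threads);
           return v;
   }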
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
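Sketched in code, a reader thread bracketing a blocking call would look roughly like this (assuming ``v`` is the QS variable and ``thread_id`` the reader's registered ID):

.. code-block:: c

   #include <rte_rcu_qsbr.h>

   static void
   reader_blocking_section(struct rte_rcu_qsbr *v, unsigned int thread_id)
   {
           /* Stop reporting while blocked, so writers do not wait on this thread. */
           rte_rcu_qsbr_thread_offline(v, thread_id);
           /* ... blocking API call, e.g. an eventdev dequeue that may sleep ... */
           rte_rcu_qsbr_thread_online(v, thread_id);
   }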
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block until all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
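A sketch of the writer-side two-step removal described above; ``struct element``, ``remove_element()`` and ``free_element()`` are hypothetical application helpers, not DPDK APIs:

.. code-block:: c

   #include <rte_rcu_qsbr.h>

   static void
   writer_delete(struct rte_rcu_qsbr *v, struct element *e)
   {
           uint64_t token;

           remove_element(e);                  /* step 1: unlink (hypothetical helper) */
           token = rte_rcu_qsbr_start(v);      /* start tracking the grace period */
           rte_rcu_qsbr_check(v, token, true); /* block until all readers are quiescent */
           free_element(e);                    /* step 2: safe to return the memory */
   }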
@@ -173,7 +173,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -181,9 +181,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
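In code, the reader-side reporting loop described above might be sketched as follows (``process_packets()`` is a hypothetical application function acting as the critical section):

.. code-block:: c

   #include <stdbool.h>
   #include <rte_rcu_qsbr.h>

   static void
   reader_loop(struct rte_rcu_qsbr *v, unsigned int thread_id, volatile bool *done)
   {
           while (!*done) {
                   process_packets();                     /* critical section (hypothetical) */
                   rte_rcu_qsbr_quiescent(v, thread_id);  /* no shared references held here */
           }
   }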
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -203,40 +203,43 @@ the application. When a writer deletes an entry from a data structure, the write
There are several APIs provided to help with this process. The writer
can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing too large.
+If the writer runs out of resources, it can call the ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources.
+``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+would hide this complexity from the application and make it easier for the application to adopt lock-free algorithms.
+
+The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
#. Initialization
#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Reclaiming Resources
#. Shutdown
The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such as rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK, such as the L3 Forwarding example application, OVS, VPP etc.
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
+The application must handle 'Initialization' and 'Quiescent State Reporting'. Therefore, the application:
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
+* Must create the RCU variable and register the reader threads to report their quiescent state.
+* Must register the same RCU variable with the client library.
+* Must ensure that its reader threads report the quiescent state. This allows the application to control the length of the critical section and how frequently it reports the quiescent state.
-The client library will handle 'Reclaiming Resources' part of the process. The
+The client library will handle the 'Reclaiming Resources' part of the process. The
client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
+reclamation algorithm. So, the client library should:
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
+* Provide an API to register an RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
+* Call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and retry the allocation if it runs out of resources while adding entries.
The 'Shutdown' process needs to be shared between the application and the
-client library.
+client library. Note that:
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
+* The application should make sure that the reader threads are not using the shared data structure and should unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+* The client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
--
2.34.1
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH v2 1/9] doc: reword pmd section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (7 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
@ 2024-06-22 14:52 ` Stephen Hemminger
8 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:52 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:46 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I made edits for syntax/grammar the PMD section of the prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 30+ messages in thread