From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>
Subject: [PATCH v2] doc/multi-process: fix grammar and phrasing
Date: Fri,  4 Oct 2024 15:10:41 -0700	[thread overview]
Message-ID: <20241004221041.67439-1-stephen@networkplumber.org> (raw)
In-Reply-To: <20220601095719.1168-1-kai.ji@intel.com>

Simplify awkward wording in the description of the multi-process
sample applications.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 doc/guides/sample_app_ug/multi_process.rst | 168 ++++++++-------------
 1 file changed, 61 insertions(+), 107 deletions(-)

diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index c53331def3..ae66015ae8 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -6,14 +6,14 @@
 Multi-process Sample Application
 ================================
 
-This chapter describes the example applications for multi-processing that are included in the DPDK.
+This chapter describes example multi-processing applications that are included in the DPDK.
 
 Example Applications
 --------------------
 
 Building the Sample Applications
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The multi-process example applications are built in the same way as other sample applications,
+The multi-process example applications are built the same way as other sample applications,
 and as documented in the *DPDK Getting Started Guide*.
 
 
@@ -23,21 +23,20 @@ The applications are located in the ``multi_process`` sub-directory.
 
 .. note::
 
-    If just a specific multi-process application needs to be built,
-    the final make command can be run just in that application's directory,
-    rather than at the top-level multi-process directory.
+    If only a specific multi-process application needs to be built,
+    the final make command can be run just in that application's directory.
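+
+    For example, assuming the DPDK libraries are already installed and visible
+    to pkg-config, a single example can be built in place (the path shown is
+    illustrative):
+
+    .. code-block:: console
+
+        cd examples/multi_process/simple_mp
+        make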
 
 Basic Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The examples/simple_mp folder in the DPDK release contains a basic example application to demonstrate how
-two DPDK processes can work together using queues and memory pools to share information.
+The examples/simple_mp folder contains a basic example application that demonstrates how
+two DPDK processes can work together to share information using queues and memory pools.
 
 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-To run the application, start one copy of the simple_mp binary in one terminal,
-passing at least two cores in the coremask/corelist, as follows:
+To run the application, start the simple_mp binary in one terminal,
+passing at least two cores in the coremask/corelist:
 
 .. code-block:: console
 
@@ -79,12 +78,11 @@ again run the same binary setting at least two cores in the coremask/corelist:
 
     ./<build_dir>/examples/dpdk-simple_mp -l 2-3 -n 4 --proc-type=secondary
 
-When running a secondary process such as that shown above, the proc-type parameter can again be specified as auto.
-However, omitting the parameter altogether will cause the process to try and start as a primary rather than secondary process.
+When running a secondary process such as the one above, the proc-type parameter can again be specified as auto.
+Omitting the parameter causes the process to try to start as a primary rather than a secondary process.
 
-Once the process type is specified correctly,
-the process starts up, displaying largely similar status messages to the primary instance as it initializes.
-Once again, you will be presented with a command prompt.
+The process starts up, displaying status messages similar to those of the primary instance as it initializes,
+and then prints a command prompt.
 
 Once both processes are running, messages can be sent between them using the send command.
 At any stage, either process can be terminated using the quit command.
@@ -108,10 +106,8 @@ At any stage, either process can be terminated using the quit command.
 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The core of this example application is based on using two queues and a single memory pool in shared memory.
-These three objects are created at startup by the primary process,
-since the secondary process cannot create objects in memory as it cannot reserve memory zones,
-and the secondary process then uses lookup functions to attach to these objects as it starts up.
+This application uses two queues and a single memory pool, created in shared memory by the primary process.
+The secondary process then uses lookup functions to attach to these objects.
 
 .. literalinclude:: ../../../examples/multi_process/simple_mp/main.c
         :language: c
@@ -121,23 +117,20 @@ and the secondary process then uses lookup functions to attach to these objects
 
 Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
 
-Once the rings and memory pools are all available in both the primary and secondary processes,
-the application simply dedicates two threads to sending and receiving messages respectively.
-The receive thread simply dequeues any messages on the receive ring, prints them,
-and frees the buffer space used by the messages back to the memory pool.
-The send thread makes use of the command-prompt library to interactively request user input for messages to send.
-Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
-then enqueued on the appropriate rte_ring.
+The application has two threads, sketched below:
+
+sender
+   Reads lines from stdin, converts each to a message, and enqueues it on the send ring.
+
+receiver
+   Dequeues messages from the receive ring, prints them, and frees the buffers back to the memory pool.
+
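+In outline (a minimal sketch only, not the example's actual code; the ring and
+pool handles are assumed to have been created or looked up as shown above, and
+the message size is an illustrative constant):
+
+.. code-block:: c
+
+    #include <stdio.h>
+
+    #include <rte_ring.h>
+    #include <rte_mempool.h>
+
+    #define MSG_SIZE 64   /* illustrative buffer size */
+
+    /* Receiver thread: drain the receive ring, print each message,
+     * and return the buffer to the shared memory pool. */
+    static void
+    recv_loop(struct rte_ring *recv_ring, struct rte_mempool *msg_pool)
+    {
+        void *msg;
+
+        for (;;) {
+            if (rte_ring_dequeue(recv_ring, &msg) < 0)
+                continue;            /* nothing queued yet */
+            printf("Received '%s'\n", (char *)msg);
+            rte_mempool_put(msg_pool, msg);
+        }
+    }
+
+    /* Sender: copy one line of user input into a pool buffer and
+     * enqueue it on the send ring. */
+    static int
+    send_msg(struct rte_ring *send_ring, struct rte_mempool *msg_pool,
+             const char *text)
+    {
+        void *msg;
+
+        if (rte_mempool_get(msg_pool, &msg) < 0)
+            return -1;               /* pool exhausted */
+        snprintf((char *)msg, MSG_SIZE, "%s", text);
+        if (rte_ring_enqueue(send_ring, msg) < 0) {
+            rte_mempool_put(msg_pool, msg);  /* do not leak on failure */
+            return -1;
+        }
+        return 0;
+    }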
 
 Symmetric Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
+The symmetric multi-process example demonstrates how a set of processes can run in parallel,
 with each process performing the same set of packet- processing operations.
-(Since each process is identical in functionality to the others,
-we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi- processing -
-such as a client-server mode of operation seen in the next example,
-where different processes perform different tasks, yet co-operate to form a packet-processing system.)
 The following diagram shows the data-flow through the application, using two processes.
 
 .. _figure_sym_multi_proc_app:
@@ -147,33 +140,27 @@ The following diagram shows the data-flow through the application, using two pro
    Example Data Flow in a Symmetric Multi-process Application
 
 
-As the diagram shows, each process reads packets from each of the network ports in use.
-RSS is used to distribute incoming packets on each port to different hardware RX queues.
+Each process reads packets from each of the network ports in use.
+RSS distributes incoming packets on each port to different hardware RX queues.
 Each process reads a different RX queue on each port and so does not contend with any other process for that queue access.
-Similarly, each process writes outgoing packets to a different TX queue on each port.
+Each process writes outgoing packets to a different TX queue on each port.
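+
+As a rough sketch of how such a port could be set up (an illustrative
+assumption, not the sample's actual initialization code), RSS is enabled and
+one RX/TX queue pair is created per co-operating process:
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+
+    /* Configure a port with num_procs RX and TX queues and let RSS
+     * spread incoming packets across the RX queues. */
+    static int
+    configure_port(uint16_t port_id, uint16_t num_procs)
+    {
+        struct rte_eth_conf port_conf = {
+            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
+        };
+
+        return rte_eth_dev_configure(port_id, num_procs, num_procs,
+                                     &port_conf);
+    }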
 
 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
-though with a number of other application- specific parameters also provided after the EAL arguments.
-These additional parameters are:
+The first instance of the symmetric_mp process must be run as the primary instance.
+In addition to the EAL arguments, the application-specific parameters are:
 
-*   -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
+*   -p <portmask>, where portmask is a hexadecimal bitmask of which ports on the system are to be used.
     For example: -p 3 to use ports 0 and 1 only.
 
-*   --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
+*   --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
     This parameter is used to configure the appropriate number of receive queues on each network port.
 
 *   --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes, specified above).
     This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.
 
-The secondary symmetric_mp instances must also have these parameters specified,
-and the first two must be the same as those passed to the primary instance, or errors result.
-
-For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
-all performing level-2 forwarding of packets between ports 0 and 1,
-the following commands can be used (assuming run as root):
+The secondary instances must be started with the same application parameters;
+the portmask and num-procs values must match those given to the primary instance.
+For example, to run four symmetric_mp instances on lcores 1-4, forwarding packets between ports 0 and 1:
 
 .. code-block:: console
 
@@ -184,31 +171,13 @@ the following commands can be used (assuming run as root):
 
 .. note::
 
-    In the above example, the process type can be explicitly specified as primary or secondary, rather than auto.
-    When using auto, the first process run creates all the memory structures needed for all processes -
-    irrespective of whether it has a proc-id of 0, 1, 2 or 3.
+    In the above example, the process type is auto, so whichever instance is started first
+    becomes the primary process and creates the shared memory structures, irrespective of its proc-id.
 
-.. note::
-
-    For the symmetric multi-process example, since all processes work in the same manner,
-    once the hugepage shared memory and the network ports are initialized,
-    it is not necessary to restart all processes if the primary instance dies.
-    Instead, that process can be restarted as a secondary,
-    by explicitly setting the proc-type to secondary on the command line.
-    (All subsequent instances launched will also need this explicitly specified,
-    as auto-detection will detect no primary processes running and therefore attempt to re-initialize shared memory.)
 
 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The initialization calls in both the primary and secondary instances are the same for the most part,
-calling the rte_eal_init(), 1 G and 10 G driver initialization and then probing devices.
-Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.
-
-In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
-the number of RX and TX queues per port being determined by the num-procs parameter passed on the command-line.
-The structures for the initialized network ports are stored in shared memory and
-therefore will be accessible by the secondary process as it initializes.
+The primary instance creates the memory pool and initializes the network ports.
 
 .. literalinclude:: ../../../examples/multi_process/symmetric_mp/main.c
         :language: c
@@ -216,27 +185,27 @@ therefore will be accessible by the secondary process as it initializes.
         :end-before: >8 End of primary instance initialization.
         :dedent: 1
 
-In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
-giving the secondary process access to the hardware and software rings for each network port.
-Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
+The secondary instance uses the port information exported by the primary process,
+which gives it access to the hardware and software rings for each network port.
+The memory pool of mbufs is accessed by looking it up by name:
 
 .. code-block:: c
 
-    mp = (proc_type == RTE_PROC_SECONDARY) ? rte_mempool_lookup(_SMP_MBUF_POOL) : rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... )
+    if (proc_type == RTE_PROC_SECONDARY)
+       mp = rte_mempool_lookup(_SMP_MBUF_POOL);
+    else
+       mp = rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... );
 
-Once this initialization is complete, the main loop of each process, both primary and secondary,
-is exactly the same - each process reads from each port using the queue corresponding to its proc-id parameter,
+The main loop of each process, both primary and secondary, is the same.
+Each process reads from each port using the queue corresponding to its proc-id parameter,
 and writes to the corresponding transmit queue on the output port.
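+
+A minimal sketch of one iteration of that loop (the burst size and forwarding
+pattern are simplified assumptions, not the sample's exact code):
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+    #include <rte_mbuf.h>
+
+    #define BURST_SIZE 32
+
+    /* Forward packets from rx_port to tx_port, using only the RX/TX
+     * queue pair owned by this process (its proc-id). */
+    static void
+    forward_burst(uint16_t rx_port, uint16_t tx_port, uint16_t proc_id)
+    {
+        struct rte_mbuf *bufs[BURST_SIZE];
+        uint16_t nb_rx, nb_tx;
+
+        nb_rx = rte_eth_rx_burst(rx_port, proc_id, bufs, BURST_SIZE);
+        if (nb_rx == 0)
+            return;
+
+        nb_tx = rte_eth_tx_burst(tx_port, proc_id, bufs, nb_rx);
+
+        /* Free any packets the TX queue could not accept. */
+        while (nb_tx < nb_rx)
+            rte_pktmbuf_free(bufs[nb_tx++]);
+    }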
 
 Client-Server Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The third example multi-process application included with the DPDK shows how one can
-use a client-server type multi-process design to do packet processing.
-In this example, a single server process performs the packet reception from the ports being used and
-distributes these packets using round-robin ordering among a set of client  processes,
-which perform the actual packet processing.
-In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.
+This example demonstrates a client-server multi-process design.
+A single server process receives packets from the ports in use and distributes them,
+in round-robin order, to a set of client processes.
+Each client performs level-2 forwarding by sending each packet back out on a different network port.
 
 The following diagram shows the data-flow through the application, using two client processes.
 
@@ -250,7 +219,7 @@ The following diagram shows the data-flow through the application, using two cli
 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-The server process must be run initially as the primary process to set up all memory structures for use by the clients.
+The server process must be run as the primary process to set up all memory structures.
 In addition to the EAL parameters, the application- specific parameters are:
 
 *   -p <portmask >, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
@@ -261,14 +230,14 @@ In addition to the EAL parameters, the application- specific parameters are:
 
 .. note::
 
-    In the server process, a single thread, the main thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O.
-    If a coremask/corelist is specified with more than a single lcore bit set in it,
-    an additional lcore will be used for a thread to periodically print packet count statistics.
+    In the server process, a single thread (the main thread, on the lowest numbered lcore
+    in the coremask/corelist) performs all packet I/O.
+    If the coremask/corelist specifies more than one lcore,
+    an additional lcore is used for a thread that periodically prints packet count statistics.
 
-Since the server application stores configuration data in shared memory, including the network ports to be used,
-the only application parameter needed by a client process is its client instance ID.
-Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
-the following commands could be used:
+The server application stores configuration data in shared memory, including the network ports used.
+The only application parameter needed by a client process is its client instance ID.
+To run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
+the commands are:
 
 .. code-block:: console
 
@@ -285,27 +254,12 @@ the following commands could be used:
 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
-One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
-This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
-as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
-
-In the same way that the server process is designed to be run as a primary process instance only,
-the client processes are designed to be run as secondary instances only.
-They have no code to attempt to create shared memory objects.
-Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
-The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
-which will, as in the symmetric multi-process example,
-automatically get access to the network ports using the settings already configured by the primary/server process.
-
-Once all applications are initialized, the server operates by reading packets from each network port in turn and
-distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
-On the client side, the packets are read from the rings in as big of bursts as possible, then routed out to a different network port.
-The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
-Similarly, packets are routed between the 3rd and 4th network ports and so on.
-The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
+The server (primary) process initializes the network ports and data structures and
+stores its port configuration data in a memory zone in hugepage shared memory,
+so the client processes do not need a portmask parameter on the command line.
+The clients run as secondary processes and attach to the shared rings and memory pools
+by name, using rte_ring_lookup() and rte_mempool_lookup().
 
-In both the server and the client processes, outgoing packets are buffered before being sent,
-so as to allow the sending of multiple packets in a single burst to improve efficiency.
-For example, the client process will buffer packets to send,
-until either the buffer is full or until we receive no further packets from the server.
+The server operates by reading packets from each network port in turn and distributing them,
+in round-robin order, to the client queues (software rings, one per client process).
+Each client reads packets from its ring and sends them back out on a different network port.
+The routing used is very simple: all packets received on the first NIC port are transmitted back out on the second port and vice versa.
+Similarly, packets are routed between the 3rd and 4th network ports and so on.
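+
+A minimal sketch of the two sides (the burst size, helper names and round-robin
+bookkeeping are illustrative assumptions, not the sample's actual code):
+
+.. code-block:: c
+
+    #include <rte_ethdev.h>
+    #include <rte_mbuf.h>
+    #include <rte_ring.h>
+
+    #define BURST_SIZE 32
+
+    /* Server: read a burst from a port and hand it to the next client
+     * ring in round-robin order. The rings are created by the server,
+     * one per client. */
+    static void
+    server_distribute(uint16_t port, struct rte_ring **client_rings,
+                      unsigned int num_clients, unsigned int *next)
+    {
+        struct rte_mbuf *bufs[BURST_SIZE];
+        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
+        unsigned int sent;
+
+        if (nb_rx == 0)
+            return;
+
+        sent = rte_ring_enqueue_burst(client_rings[*next],
+                                      (void **)bufs, nb_rx, NULL);
+        *next = (*next + 1) % num_clients;
+
+        /* Drop whatever the client ring could not hold. */
+        while (sent < nb_rx)
+            rte_pktmbuf_free(bufs[sent++]);
+    }
+
+    /* Client: drain the ring found earlier with rte_ring_lookup() and
+     * transmit each packet directly on the partner port (0<->1, 2<->3). */
+    static void
+    client_forward(struct rte_ring *rx_ring)
+    {
+        struct rte_mbuf *bufs[BURST_SIZE];
+        unsigned int i, nb;
+
+        nb = rte_ring_dequeue_burst(rx_ring, (void **)bufs, BURST_SIZE, NULL);
+        for (i = 0; i < nb; i++) {
+            uint16_t out_port = bufs[i]->port ^ 1;
+            if (rte_eth_tx_burst(out_port, 0, &bufs[i], 1) == 0)
+                rte_pktmbuf_free(bufs[i]);
+        }
+    }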
-- 
2.45.2


