* [dpdk-dev v1] doc/multi-process: fixed grammar and rephrasing
@ 2022-06-01 9:57 Kai Ji
2022-07-11 21:08 ` Thomas Monjalon
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Kai Ji @ 2022-06-01 9:57 UTC (permalink / raw)
To: Anatoly Burakov, Bernard Iremonger; +Cc: dev, Kai Ji
Update and rephrasing some sentences, small improvements
made to the multi-process sample application user guide
Fixes: d0dff9ba445e ("doc: sample application user guide")
Cc: bernard.iremonger@intel.com
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
doc/guides/sample_app_ug/multi_process.rst | 67 +++++++++++-----------
1 file changed, 33 insertions(+), 34 deletions(-)
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index c53331def3..e2a311a426 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2010-2014 Intel Corporation.
+ Copyright(c) 2010-2022 Intel Corporation.
.. _multi_process_app:
@@ -111,7 +111,7 @@ How the Application Works
The core of this example application is based on using two queues and a single memory pool in shared memory.
These three objects are created at startup by the primary process,
since the secondary process cannot create objects in memory as it cannot reserve memory zones,
-and the secondary process then uses lookup functions to attach to these objects as it starts up.
+thus the secondary process uses lookup functions to attach to these objects as it starts up.
.. literalinclude:: ../../../examples/multi_process/simple_mp/main.c
:language: c
@@ -119,25 +119,25 @@ and the secondary process then uses lookup functions to attach to these objects
:end-before: >8 End of ring structure.
:dedent: 1
-Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
+Note that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
Once the rings and memory pools are all available in both the primary and secondary processes,
the application simply dedicates two threads to sending and receiving messages respectively.
-The receive thread simply dequeues any messages on the receive ring, prints them,
-and frees the buffer space used by the messages back to the memory pool.
-The send thread makes use of the command-prompt library to interactively request user input for messages to send.
-Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
-then enqueued on the appropriate rte_ring.
+The receiver thread simply dequeues any messages on the receive ring, prints them to the terminal,
+then releases the buffer space used by the messages back to the memory pool.
+The sender thread makes use of the command-prompt library to interactively request user input for messages to send.
+Once a send command is issued, the message contents are placed into a buffer allocated from the memory pool,
+and the buffer is then enqueued on the appropriate rte_ring.
Symmetric Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
-with each process performing the same set of packet- processing operations.
-(Since each process is identical in functionality to the others,
-we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi- processing -
-such as a client-server mode of operation seen in the next example,
-where different processes perform different tasks, yet co-operate to form a packet-processing system.)
+The second DPDK multi-process example demonstrates how a set of processes can run in parallel,
+with each process performing the same set of packet-processing operations.
+(As each process is identical in functionality to the others,
+we refer to this as symmetric multi-processing. In the asymmetric multi-processing example,
+the client and server processes perform different tasks,
+yet co-operate to form a packet-processing system.)
The following diagram shows the data-flow through the application, using two processes.
.. _figure_sym_multi_proc_app:
@@ -155,9 +155,8 @@ Similarly, each process writes outgoing packets to a different TX queue on each
Running the Application
^^^^^^^^^^^^^^^^^^^^^^^
-As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
-though with a number of other application- specific parameters also provided after the EAL arguments.
-These additional parameters are:
+The first instance of the symmetric_mp process must be run as the primary instance,
+with the following application parameters:
* -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
For example: -p 3 to use ports 0 and 1 only.
@@ -169,7 +168,7 @@ These additional parameters are:
This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.
The secondary symmetric_mp instances must also have these parameters specified,
-and the first two must be the same as those passed to the primary instance, or errors result.
+and the <portmask> and <N> parameters need to be configured with the same values as the primary instance.
For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
all performing level-2 forwarding of packets between ports 0 and 1,
@@ -202,7 +201,7 @@ How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^
The initialization calls in both the primary and secondary instances are the same for the most part,
-calling the rte_eal_init(), 1 G and 10 G driver initialization and then probing devices.
+calling rte_eal_init(), 1G and 10G driver initialization and then probing devices.
Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.
In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
@@ -217,7 +216,7 @@ therefore will be accessible by the secondary process as it initializes.
:dedent: 1
In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
-giving the secondary process access to the hardware and software rings for each network port.
+so that the secondary process is able to access the hardware and software rings for each network port.
Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
.. code-block:: c
@@ -234,7 +233,7 @@ Client-Server Multi-process Example
The third example multi-process application included with the DPDK shows how one can
use a client-server type multi-process design to do packet processing.
In this example, a single server process performs the packet reception from the ports being used and
-distributes these packets using round-robin ordering among a set of client processes,
+distributes these packets using round-robin ordering among a set of client processes,
which perform the actual packet processing.
In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.
@@ -250,8 +249,8 @@ The following diagram shows the data-flow through the application, using two cli
Running the Application
^^^^^^^^^^^^^^^^^^^^^^^
-The server process must be run initially as the primary process to set up all memory structures for use by the clients.
-In addition to the EAL parameters, the application- specific parameters are:
+The server process must be run initially as the primary process to set up all memory structures for use by the client processes.
+In addition to the EAL parameters, the application-specific parameters are:
* -p <portmask >, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
For example: -p 3 to use ports 0 and 1 only.
@@ -285,23 +284,23 @@ the following commands could be used:
How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^
-The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
-One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
-This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
-as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
+The server process performs the network port and data structure initialization similarly to the symmetric multi-process application when run as primary.
+The server process stores port configuration data in a memory zone in hugepage shared memory, which eliminates
+the need for the client processes to have the same portmask parameter on the command line.
+This enhancement can be done for the symmetric multi-process application in the future.
In the same way that the server process is designed to be run as a primary process instance only,
the client processes are designed to be run as secondary instances only.
-They have no code to attempt to create shared memory objects.
-Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
-The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
-which will, as in the symmetric multi-process example,
-automatically get access to the network ports using the settings already configured by the primary/server process.
+The client process does not support creating shared memory objects.
+Instead, the client process can access required rings and memory pools via rte_ring_lookup() and rte_mempool_lookup() function calls.
+The network ports used by the processes are obtained by loading the network port drivers and probing the PCI bus.
+As in the symmetric multi-process example, the client process automatically gets
+access to the network port settings already configured by the primary/server process.
-Once all applications are initialized, the server operates by reading packets from each network port in turn and
+Once all applications are initialized, the server operates by reading packets from each network port in turn and
distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
On the client side, the packets are read from the rings in as big of bursts as possible, then routed out to a different network port.
-The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
+The routing used is very simple: all packets received on the first NIC port are transmitted back out on the second port and vice versa.
Similarly, packets are routed between the 3rd and 4th network ports and so on.
The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
--
2.17.1
* Re: [dpdk-dev v1] doc/multi-process: fixed grammar and rephrasing
From: Thomas Monjalon @ 2022-07-11 21:08 UTC (permalink / raw)
To: Kai Ji; +Cc: Anatoly Burakov, Bernard Iremonger, dev
Anyone to review?
Please could you go a step further and remove one useless header level,
fix links, enclose code with double backticks and other basic stuff?
Thanks
01/06/2022 11:57, Kai Ji:
> Update and rephrasing some sentences, small improvements
> made to the multi-process sample application user guide
>
> Fixes: d0dff9ba445e ("doc: sample application user guide")
> Cc: bernard.iremonger@intel.com
>
> Signed-off-by: Kai Ji <kai.ji@intel.com>
* Re: [dpdk-dev v1] doc/multi-process: fixed grammar and rephrasing
From: Stephen Hemminger @ 2024-10-04 0:04 UTC (permalink / raw)
To: Kai Ji; +Cc: Anatoly Burakov, Bernard Iremonger, dev
On Wed, 1 Jun 2022 17:57:19 +0800
Kai Ji <kai.ji@intel.com> wrote:
> Update and rephrasing some sentences, small improvements
> made to the multi-process sample application user guide
>
> Fixes: d0dff9ba445e ("doc: sample application user guide")
> Cc: bernard.iremonger@intel.com
>
> Signed-off-by: Kai Ji <kai.ji@intel.com>
It's a start, but this whole document needs to be rewritten.
It is overly verbose and doesn't introduce multi-process well.
Giving an ack since it is better than what was there.
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
* [PATCH v2] doc/multi-process: fix grammar and phrasing
From: Stephen Hemminger @ 2024-10-04 22:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Simplify awkward wording in description of the multi process
application.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/multi_process.rst | 168 ++++++++-------------
1 file changed, 61 insertions(+), 107 deletions(-)
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index c53331def3..ae66015ae8 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -6,14 +6,14 @@
Multi-process Sample Application
================================
-This chapter describes the example applications for multi-processing that are included in the DPDK.
+This chapter describes example multi-processing applications that are included in the DPDK.
Example Applications
--------------------
Building the Sample Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The multi-process example applications are built in the same way as other sample applications,
+The multi-process example applications are built the same way as other sample applications,
and as documented in the *DPDK Getting Started Guide*.
@@ -23,21 +23,20 @@ The applications are located in the ``multi_process`` sub-directory.
.. note::
- If just a specific multi-process application needs to be built,
- the final make command can be run just in that application's directory,
- rather than at the top-level multi-process directory.
+ If only a specific multi-process application needs to be built,
+ the final make command can be run just in that application's directory.
Basic Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The examples/simple_mp folder in the DPDK release contains a basic example application to demonstrate how
-two DPDK processes can work together using queues and memory pools to share information.
+The examples/simple_mp folder contains a basic example application that demonstrates how
+two DPDK processes can work together to share information using queues and memory pools.
Running the Application
^^^^^^^^^^^^^^^^^^^^^^^
-To run the application, start one copy of the simple_mp binary in one terminal,
-passing at least two cores in the coremask/corelist, as follows:
+To run the application, start simple_mp binary in one terminal,
+passing at least two cores in the coremask/corelist:
.. code-block:: console
@@ -79,12 +78,11 @@ again run the same binary setting at least two cores in the coremask/corelist:
./<build_dir>/examples/dpdk-simple_mp -l 2-3 -n 4 --proc-type=secondary
-When running a secondary process such as that shown above, the proc-type parameter can again be specified as auto.
-However, omitting the parameter altogether will cause the process to try and start as a primary rather than secondary process.
+When running a secondary process such as above, the proc-type parameter can be specified as auto.
+Omitting the parameter will cause the process to try and start as a primary rather than secondary process.
-Once the process type is specified correctly,
-the process starts up, displaying largely similar status messages to the primary instance as it initializes.
-Once again, you will be presented with a command prompt.
+The process starts up, displaying similar status messages to the primary instance as it initializes
+then prints a command prompt.
Once both processes are running, messages can be sent between them using the send command.
At any stage, either process can be terminated using the quit command.
@@ -108,10 +106,8 @@ At any stage, either process can be terminated using the quit command.
How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^
-The core of this example application is based on using two queues and a single memory pool in shared memory.
-These three objects are created at startup by the primary process,
-since the secondary process cannot create objects in memory as it cannot reserve memory zones,
-and the secondary process then uses lookup functions to attach to these objects as it starts up.
+This application uses two queues and a single memory pool created in the primary process.
+The secondary process then uses lookup functions to attach to these objects.
.. literalinclude:: ../../../examples/multi_process/simple_mp/main.c
:language: c
@@ -121,23 +117,20 @@ and the secondary process then uses lookup functions to attach to these objects
Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
-Once the rings and memory pools are all available in both the primary and secondary processes,
-the application simply dedicates two threads to sending and receiving messages respectively.
-The receive thread simply dequeues any messages on the receive ring, prints them,
-and frees the buffer space used by the messages back to the memory pool.
-The send thread makes use of the command-prompt library to interactively request user input for messages to send.
-Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
-then enqueued on the appropriate rte_ring.
+The application has two threads:
+
+sender
+ Reads from stdin, converts them to messages, and enqueues them to the ring.
+
+receiver
+ Dequeues any messages on the ring, prints them, then frees the buffer.
+
Symmetric Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
+The symmetric multi-process example demonstrates how a set of processes can run in parallel,
with each process performing the same set of packet- processing operations.
-(Since each process is identical in functionality to the others,
-we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi- processing -
-such as a client-server mode of operation seen in the next example,
-where different processes perform different tasks, yet co-operate to form a packet-processing system.)
The following diagram shows the data-flow through the application, using two processes.
.. _figure_sym_multi_proc_app:
@@ -147,33 +140,27 @@ The following diagram shows the data-flow through the application, using two pro
Example Data Flow in a Symmetric Multi-process Application
-As the diagram shows, each process reads packets from each of the network ports in use.
-RSS is used to distribute incoming packets on each port to different hardware RX queues.
+Each process reads packets from each of the network ports in use.
+RSS distributes incoming packets on each port to different hardware RX queues.
Each process reads a different RX queue on each port and so does not contend with any other process for that queue access.
-Similarly, each process writes outgoing packets to a different TX queue on each port.
+Each process writes outgoing packets to a different TX queue on each port.
Running the Application
^^^^^^^^^^^^^^^^^^^^^^^
-As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
-though with a number of other application- specific parameters also provided after the EAL arguments.
-These additional parameters are:
+The first instance of the symmetric_mp process must be run as the primary instance,
+with these application-specific parameters provided after the EAL arguments:
-* -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
+* -p <portmask>, where portmask is a hexadecimal bitmask of the ports on the system to be used.
For example: -p 3 to use ports 0 and 1 only.
-* --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
+* --num-procs <N>, where N is the total number of symmetric_mp instances that will run side-by-side to perform packet processing.
This parameter is used to configure the appropriate number of receive queues on each network port.
* --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes, specified above).
This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.
-The secondary symmetric_mp instances must also have these parameters specified,
-and the first two must be the same as those passed to the primary instance, or errors result.
-
-For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
-all performing level-2 forwarding of packets between ports 0 and 1,
-the following commands can be used (assuming run as root):
+The secondary instances must be started with the same application parameters as the primary instance and with similar EAL parameters.
+Example:
.. code-block:: console
@@ -184,31 +171,13 @@ the following commands can be used (assuming run as root):
.. note::
- In the above example, the process type can be explicitly specified as primary or secondary, rather than auto.
- When using auto, the first process run creates all the memory structures needed for all processes -
- irrespective of whether it has a proc-id of 0, 1, 2 or 3.
+ In the above example, the process type auto is used, so the first instance started becomes the primary process.
-.. note::
-
- For the symmetric multi-process example, since all processes work in the same manner,
- once the hugepage shared memory and the network ports are initialized,
- it is not necessary to restart all processes if the primary instance dies.
- Instead, that process can be restarted as a secondary,
- by explicitly setting the proc-type to secondary on the command line.
- (All subsequent instances launched will also need this explicitly specified,
- as auto-detection will detect no primary processes running and therefore attempt to re-initialize shared memory.)
How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^
-The initialization calls in both the primary and secondary instances are the same for the most part,
-calling the rte_eal_init(), 1 G and 10 G driver initialization and then probing devices.
-Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.
-
-In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
-the number of RX and TX queues per port being determined by the num-procs parameter passed on the command-line.
-The structures for the initialized network ports are stored in shared memory and
-therefore will be accessible by the secondary process as it initializes.
+The primary instance creates the mbuf memory pool and initializes the network ports,
+with the number of RX and TX queues per port determined by the num-procs parameter.
.. literalinclude:: ../../../examples/multi_process/symmetric_mp/main.c
:language: c
@@ -216,27 +185,27 @@ therefore will be accessible by the secondary process as it initializes.
:end-before: >8 End of primary instance initialization.
:dedent: 1
-In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
-giving the secondary process access to the hardware and software rings for each network port.
-Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
+The secondary instance uses the port information exported by the primary process.
+The memory pool is accessed by doing a lookup for it by name:
.. code-block:: c
- mp = (proc_type == RTE_PROC_SECONDARY) ? rte_mempool_lookup(_SMP_MBUF_POOL) : rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... )
+ if (proc_type == RTE_PROC_SECONDARY)
+ mp = rte_mempool_lookup(_SMP_MBUF_POOL);
+ else
+ mp = rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... );
-Once this initialization is complete, the main loop of each process, both primary and secondary,
-is exactly the same - each process reads from each port using the queue corresponding to its proc-id parameter,
+The main loop of each process, both primary and secondary, is the same.
+Each process reads from each port using the queue corresponding to its proc-id parameter,
and writes to the corresponding transmit queue on the output port.
Client-Server Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The third example multi-process application included with the DPDK shows how one can
-use a client-server type multi-process design to do packet processing.
-In this example, a single server process performs the packet reception from the ports being used and
-distributes these packets using round-robin ordering among a set of client processes,
-which perform the actual packet processing.
-In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.
+This example demonstrates a client-server multi-process design.
+A single server process receives packets from the ports in use and distributes them
+in round-robin order to a set of client processes.
+Each client performs level-2 forwarding by sending each packet out on a different network port.
The following diagram shows the data-flow through the application, using two client processes.
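The round-robin distribution step can be sketched in a few lines. The function name `rr_next` and the explicit `num_clients` parameter are assumptions for illustration; the real server keeps equivalent state internally while enqueuing each received packet to a client ring.

```c
#include <assert.h>

/* Index of the client queue that will receive the next packet. */
static unsigned next_client = 0;

/* Round-robin selection: returns the current client's index and
 * advances to the next, wrapping back to client 0 after the last. */
unsigned rr_next(unsigned num_clients)
{
    unsigned c = next_client;
    next_client = (next_client + 1) % num_clients;
    return c;
}
```

With two clients, successive packets would go to clients 0, 1, 0, 1, and so on.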
@@ -250,7 +219,7 @@ The following diagram shows the data-flow through the application, using two cli
Running the Application
^^^^^^^^^^^^^^^^^^^^^^^
-The server process must be run initially as the primary process to set up all memory structures for use by the clients.
+The server process must be run as the primary process to set up all memory structures.
In addition to the EAL parameters, the application- specific parameters are:
* -p <portmask >, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
@@ -261,14 +230,14 @@ In addition to the EAL parameters, the application- specific parameters are:
.. note::
- In the server process, a single thread, the main thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O.
- If a coremask/corelist is specified with more than a single lcore bit set in it,
- an additional lcore will be used for a thread to periodically print packet count statistics.
+ In the server process, a single thread (on the lowest numbered lcore in the coremask/corelist) performs all packet I/O.
+ If the coremask/corelist specifies more than one lcore,
+ an additional lcore is used for a thread that periodically prints packet count statistics.
-Since the server application stores configuration data in shared memory, including the network ports to be used,
-the only application parameter needed by a client process is its client instance ID.
-Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
-the following commands could be used:
+The server application stores configuration data in shared memory, including the network ports used.
+The only application parameter needed by a client process is its client instance ID.
+To run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
+the commands are:
.. code-block:: console
@@ -285,27 +254,12 @@ the following commands could be used:
How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^
-The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
-One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
-This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
-as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
-
-In the same way that the server process is designed to be run as a primary process instance only,
-the client processes are designed to be run as secondary instances only.
-They have no code to attempt to create shared memory objects.
-Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
-The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
-which will, as in the symmetric multi-process example,
-automatically get access to the network ports using the settings already configured by the primary/server process.
-
-Once all applications are initialized, the server operates by reading packets from each network port in turn and
-distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
-On the client side, the packets are read from the rings in as big of bursts as possible, then routed out to a different network port.
-The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
-Similarly, packets are routed between the 3rd and 4th network ports and so on.
-The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
+The server (primary) process initializes the network ports and data structures and
+stores its port configuration data in a memory zone in hugepage shared memory,
+so the client processes do not need a portmask parameter on the command line.
+The clients run as secondary processes and attach to the shared rings and memory pool by name lookup.
-In both the server and the client processes, outgoing packets are buffered before being sent,
-so as to allow the sending of multiple packets in a single burst to improve efficiency.
-For example, the client process will buffer packets to send,
-until either the buffer is full or until we receive no further packets from the server.
+The server operates by reading packets from each network port and distributing those packets to the client queues.
+Each client reads packets from its ring and routes them out on a different network port.
+The routing used is very simple: all packets received on the first NIC port are transmitted back out on the second port and vice versa.
+Similarly, packets are routed between the 3rd and 4th network ports and so on.
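The port pairing described above (0 with 1, 2 with 3, ...) reduces to flipping the lowest bit of the port number. This is a sketch of that arithmetic; the helper name `paired_port` is an assumption, not a function from the sample code.

```c
#include <assert.h>

/* Level-2 forwarding pairing: XOR-ing the lowest bit maps a port to
 * its partner, so 0 <-> 1, 2 <-> 3, and so on. */
unsigned paired_port(unsigned in_port)
{
    return in_port ^ 1u;
}
```

A packet received on port 2 is therefore transmitted on port 3, and vice versa, without any lookup table.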
--
2.45.2
2022-06-01 9:57 [dpdk-dev v1] doc/multi-process: fixed grammar and rephrasing Kai Ji
2022-07-11 21:08 ` Thomas Monjalon
2024-10-04 0:04 ` Stephen Hemminger
2024-10-04 22:10 ` [PATCH v2] doc/multi-process: fix grammar and phrasing Stephen Hemminger