From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: Kai Ji
Cc: Anatoly Burakov, Bernard Iremonger, dev@dpdk.org
Subject: Re: [dpdk-dev v1] doc/multi-process: fixed grammar and rephrasing
Date: Mon, 11 Jul 2022 23:08:26 +0200
Message-ID: <3641749.SyXcmblsem@thomas>
In-Reply-To: <20220601095719.1168-1-kai.ji@intel.com>
References: <20220601095719.1168-1-kai.ji@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
Anyone to review?

Please could you go a step further and remove one useless header level,
fix links, enclose code with double backticks and other basic stuff?

Thanks


01/06/2022 11:57, Kai Ji:
> Update and rephrasing some sentences, small improvements
> made to the multi-process sample application user guide
>
> Fixes: d0dff9ba445e ("doc: sample application user guide")
> Cc: bernard.iremonger@intel.com
>
> Signed-off-by: Kai Ji
> ---
>  doc/guides/sample_app_ug/multi_process.rst | 67 +++++++++++-----------
>  1 file changed, 33 insertions(+), 34 deletions(-)
>
> diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
> index c53331def3..e2a311a426 100644
> --- a/doc/guides/sample_app_ug/multi_process.rst
> +++ b/doc/guides/sample_app_ug/multi_process.rst
> @@ -1,5 +1,5 @@
>  .. SPDX-License-Identifier: BSD-3-Clause
> -   Copyright(c) 2010-2014 Intel Corporation.
> +   Copyright(c) 2010-2022 Intel Corporation.
>
>  .. _multi_process_app:
>
> @@ -111,7 +111,7 @@ How the Application Works
>  The core of this example application is based on using two queues and a single memory pool in shared memory.
>  These three objects are created at startup by the primary process,
>  since the secondary process cannot create objects in memory as it cannot reserve memory zones,
> -and the secondary process then uses lookup functions to attach to these objects as it starts up.
> +thus the secondary process uses lookup functions to attach to these objects as it starts up.
>
>  .. literalinclude:: ../../../examples/multi_process/simple_mp/main.c
>     :language: c
> @@ -119,25 +119,25 @@ and the secondary process then uses lookup functions to attach to these objects
>     :end-before: >8 End of ring structure.
>     :dedent: 1
>
> -Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
> +Note, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.
>
>  Once the rings and memory pools are all available in both the primary and secondary processes,
>  the application simply dedicates two threads to sending and receiving messages respectively.
> -The receive thread simply dequeues any messages on the receive ring, prints them,
> -and frees the buffer space used by the messages back to the memory pool.
> -The send thread makes use of the command-prompt library to interactively request user input for messages to send.
> -Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
> -then enqueued on the appropriate rte_ring.
> +The receiver thread simply dequeues any messages on the receive ring and prints out in terminal,
> +then the buffer space used by the messages is released back to the memory pool.
> +The sender thread makes use of the command-prompt library to interactively request user input for messages to send.
> +Once a send command is issued, the message contents are put into a buffer that was allocated from the memory pool,
> +which is then enqueued on the appropriate rte_ring.
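
Side note for readers of the thread: the create-vs-lookup split described in
this hunk boils down to roughly the sketch below. The ring/pool names, sizes
and flags here are invented for illustration; this is not the actual code of
examples/multi_process/simple_mp.

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>

    #define SEND_RING "MP_SEND"   /* hypothetical object names */
    #define RECV_RING "MP_RECV"
    #define MSG_POOL  "MSG_POOL"

    static struct rte_ring *send_ring, *recv_ring;
    static struct rte_mempool *msg_pool;

    static int
    attach_shared_objects(void)
    {
        if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
            /* Primary: reserve the shared objects once, after rte_eal_init(). */
            send_ring = rte_ring_create(SEND_RING, 64, rte_socket_id(), 0);
            recv_ring = rte_ring_create(RECV_RING, 64, rte_socket_id(), 0);
            msg_pool = rte_mempool_create(MSG_POOL, 1023, 64, 32, 0,
                    NULL, NULL, NULL, NULL, rte_socket_id(), 0);
        } else {
            /* Secondary: attach by name; note the send/recv swap mentioned above. */
            recv_ring = rte_ring_lookup(SEND_RING);
            send_ring = rte_ring_lookup(RECV_RING);
            msg_pool = rte_mempool_lookup(MSG_POOL);
        }
        return (send_ring != NULL && recv_ring != NULL && msg_pool != NULL) ? 0 : -1;
    }

The only real requirement is that creation happens exactly once, in the
primary; everything else attaches by name through the lookup functions.
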
>
>  Symmetric Multi-process Example
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> -The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
> -with each process performing the same set of packet- processing operations.
> -(Since each process is identical in functionality to the others,
> -we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi- processing -
> -such as a client-server mode of operation seen in the next example,
> -where different processes perform different tasks, yet co-operate to form a packet-processing system.)
> +The second DPDK multi-process example demonstrates how a set of processes can run in parallel,
> +where each process is performing the same set of packet-processing operations.
> +(As each process is identical in functionality to the others,
> +we refer to this as symmetric multi-processing. In the asymmetric multi-processing example,
> +the different client-server mode processes perform different tasks,
> +yet co-operate to form a packet-processing system.)
>  The following diagram shows the data-flow through the application, using two processes.
>
>  .. _figure_sym_multi_proc_app:
> @@ -155,9 +155,8 @@ Similarly, each process writes outgoing packets to a different TX queue on each
>  Running the Application
>  ^^^^^^^^^^^^^^^^^^^^^^^
>
> -As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
> -though with a number of other application- specific parameters also provided after the EAL arguments.
> -These additional parameters are:
> +The first instance of the symmetric_mp process must be run as the primary instance,
> +with the following application parameters:
>
>  * -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
>    For example: -p 3 to use ports 0 and 1 only.
> @@ -169,7 +168,7 @@ These additional parameters are:
>    This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.
>
>  The secondary symmetric_mp instances must also have these parameters specified,
> -and the first two must be the same as those passed to the primary instance, or errors result.
> +and the <portmask> and <num-procs> parameters need to be configured with the same values as the primary instance.
>
>  For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
>  all performing level-2 forwarding of packets between ports 0 and 1,
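
For readers following along: the one-RX/TX-queue-per-process layout described
above is roughly the loop below. The burst size, the port pairing and the
function shape are simplified assumptions of mine, not the real symmetric_mp
code.

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Each symmetric_mp instance polls only the RX/TX queue matching its
     * --proc-id on every port, so instances never contend for a queue. */
    static void
    lcore_main(uint16_t nb_ports, uint16_t proc_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t port;

        for (;;) {
            for (port = 0; port < nb_ports; port++) {
                uint16_t nb_rx = rte_eth_rx_burst(port, proc_id, bufs, BURST_SIZE);
                if (nb_rx == 0)
                    continue;
                /* Level-2 forwarding between port pairs: 0 <-> 1, 2 <-> 3, ... */
                uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, proc_id, bufs, nb_rx);
                while (nb_tx < nb_rx)
                    rte_pktmbuf_free(bufs[nb_tx++]);
            }
        }
    }
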
> @@ -202,7 +201,7 @@ How the Application Works
>  ^^^^^^^^^^^^^^^^^^^^^^^^^
>
>  The initialization calls in both the primary and secondary instances are the same for the most part,
> -calling the rte_eal_init(), 1 G and 10 G driver initialization and then probing devices.
> +calling the rte_eal_init(), 1G and 10G driver initialization and then probing devices.
>  Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.
>
>  In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
> @@ -217,7 +216,7 @@ therefore will be accessible by the secondary process as it initializes.
>     :dedent: 1
>
>  In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
> -giving the secondary process access to the hardware and software rings for each network port.
> +giving the secondary process is able to access to the hardware and software rings for each network port.
>  Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
>
>  .. code-block:: c
> @@ -234,7 +233,7 @@ Client-Server Multi-process Example
>  The third example multi-process application included with the DPDK shows how one can
>  use a client-server type multi-process design to do packet processing.
>  In this example, a single server process performs the packet reception from the ports being used and
> -distributes these packets using round-robin ordering among a set of client processes,
> +distributes these packets using round-robin ordering among a set of client processes,
>  which perform the actual packet processing.
>  In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.
>
> @@ -250,8 +249,8 @@ The following diagram shows the data-flow through the application, using two cli
>  Running the Application
>  ^^^^^^^^^^^^^^^^^^^^^^^
>
> -The server process must be run initially as the primary process to set up all memory structures for use by the clients.
> -In addition to the EAL parameters, the application- specific parameters are:
> +The server process must be run initially as the primary process to set up all memory structures for use by the client processes.
> +In addition to the EAL parameters, the application-specific parameters are:
>
>  * -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
>    For example: -p 3 to use ports 0 and 1 only.
> @@ -285,23 +284,23 @@ the following commands could be used:
>  How the Application Works
>  ^^^^^^^^^^^^^^^^^^^^^^^^^
>
> -The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
> -One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
> -This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
> -as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
> +The server process performs the network port and data structure initialization similar to the primary symmetric multi-process application.
> +The server process stores port configuration data in a memory zone in hugepage shared memory, this eliminates
> +the need for the client processes to have the same portmask parameter in the command line.
> +This enhancement can be done for the symmetric multi-process application in the future.
>
>  In the same way that the server process is designed to be run as a primary process instance only,
>  the client processes are designed to be run as secondary instances only.
> -They have no code to attempt to create shared memory objects.
> -Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
> -The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
> -which will, as in the symmetric multi-process example,
> -automatically get access to the network ports using the settings already configured by the primary/server process.
> +The client process does not support creating shared memory objects.
> +Instead, the client process can access required rings and memory pools via rte_ring_lookup() and rte_mempool_lookup() function calls.
> +The available network ports use by the processes are obtained by loading the network port drivers and probing the PCI bus.
> +Same as the implementation in the symmetric multi-process example, the client process automatically gets
> +access to the network ports settings where configured by the primary/server process.
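
Again just to illustrate the lookup-based attach described in the paragraph
above: a client started with --proc-type=secondary finds the objects the
server created by name. The ring name template and the pool name below are
hypothetical, not the ones used by examples/multi_process/client_server_mp.

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    #define CLIENT_RX_RING_FMT "CLIENT_%u_RX"   /* hypothetical name template */

    static struct rte_ring *rx_ring;
    static struct rte_mempool *pkt_pool;

    static int
    client_attach(unsigned int client_id)
    {
        char ring_name[RTE_RING_NAMESIZE];

        /* The server (primary) created these; the client only looks them up.
         * Packets are then pulled in bursts with rte_ring_dequeue_burst(). */
        snprintf(ring_name, sizeof(ring_name), CLIENT_RX_RING_FMT, client_id);
        rx_ring = rte_ring_lookup(ring_name);
        pkt_pool = rte_mempool_lookup("PKT_POOL");  /* hypothetical pool name */
        return (rx_ring != NULL && pkt_pool != NULL) ? 0 : -1;
    }
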
>
> -Once all applications are initialized, the server operates by reading packets from each network port in turn and
> +Once all applications are initialized, the server operates by reading packets from each network port in turns and
>  distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
>  On the client side, the packets are read from the rings in as big of bursts as possible, then routed out to a different network port.
> -The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
> +The routing used is very simple, all packets received on the first NIC port are transmitted back out on the second port and vice versa.
>  Similarly, packets are routed between the 3rd and 4th network ports and so on.
>  The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
>
>
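
One last illustrative sketch, this time of the round-robin distribution
described in the final paragraph of the hunk. The client count, the ring
handles and the burst size are placeholders of mine; this is not the real
client_server_mp server code.

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Distribute one RX burst from each port to the per-client software rings
     * in round-robin order; rings[] were created by the server at startup. */
    static void
    server_distribute(uint16_t nb_ports, struct rte_ring **rings, unsigned int nb_clients)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        static unsigned int next_client;
        uint16_t port;

        for (port = 0; port < nb_ports; port++) {
            uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;
            unsigned int sent = rte_ring_enqueue_burst(rings[next_client],
                    (void **)bufs, nb_rx, NULL);
            /* Drop whatever the client ring could not absorb. */
            while (sent < nb_rx)
                rte_pktmbuf_free(bufs[sent++]);
            next_client = (next_client + 1) % nb_clients;
        }
    }
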