From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [PATCH v2] doc/multi-process: fix grammar and phrasing
Date: Fri, 4 Oct 2024 15:10:41 -0700
Message-ID: <20241004221041.67439-1-stephen@networkplumber.org>
In-Reply-To: <20220601095719.1168-1-kai.ji@intel.com>
References: <20220601095719.1168-1-kai.ji@intel.com>

Simplify awkward wording in the description of the multi-process application.
Signed-off-by: Stephen Hemminger
---
 doc/guides/sample_app_ug/multi_process.rst | 168 ++++++++-------------
 1 file changed, 61 insertions(+), 107 deletions(-)

diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index c53331def3..ae66015ae8 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -6,14 +6,14 @@
 Multi-process Sample Application
 ================================

-This chapter describes the example applications for multi-processing that are included in the DPDK.
+This chapter describes example multi-processing applications that are included in the DPDK.

 Example Applications
 --------------------

 Building the Sample Applications
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The multi-process example applications are built in the same way as other sample applications,
+The multi-process example applications are built the same way as other sample applications,
 and as documented in the *DPDK Getting Started Guide*.

@@ -23,21 +23,20 @@ The applications are located in the ``multi_process`` sub-directory.

 .. note::

-    If just a specific multi-process application needs to be built,
-    the final make command can be run just in that application's directory,
-    rather than at the top-level multi-process directory.
+    If only a specific multi-process application needs to be built,
+    the final make command can be run just in that application's directory.

 Basic Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The examples/simple_mp folder in the DPDK release contains a basic example application to demonstrate how
-two DPDK processes can work together using queues and memory pools to share information.
+The examples/simple_mp folder contains a basic example application that demonstrates how
+two DPDK processes can work together to share information using queues and memory pools.

 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^

-To run the application, start one copy of the simple_mp binary in one terminal,
-passing at least two cores in the coremask/corelist, as follows:
+To run the application, start the simple_mp binary in one terminal,
+passing at least two cores in the coremask/corelist:

 .. code-block:: console

@@ -79,12 +78,11 @@ again run the same binary setting at least two cores in the coremask/corelist:

     .//examples/dpdk-simple_mp -l 2-3 -n 4 --proc-type=secondary

-When running a secondary process such as that shown above, the proc-type parameter can again be specified as auto.
-However, omitting the parameter altogether will cause the process to try and start as a primary rather than secondary process.
+When running a secondary process such as the one above, the proc-type parameter can be specified as auto.
+Omitting the parameter will cause the process to try to start as a primary rather than a secondary process.

-Once the process type is specified correctly,
-the process starts up, displaying largely similar status messages to the primary instance as it initializes.
-Once again, you will be presented with a command prompt.
+The process starts up, displaying status messages similar to those of the primary instance as it initializes,
+then prints a command prompt.

 Once both processes are running, messages can be sent between them using the send command.
 At any stage, either process can be terminated using the quit command.
@@ -108,10 +106,8 @@ At any stage, either process can be terminated using the quit command.
 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^

-The core of this example application is based on using two queues and a single memory pool in shared memory.
-These three objects are created at startup by the primary process,
-since the secondary process cannot create objects in memory as it cannot reserve memory zones,
-and the secondary process then uses lookup functions to attach to these objects as it starts up.
+This application uses two queues and a single memory pool, all created in the primary process.
+The secondary process then uses lookup functions to attach to these objects.

 .. literalinclude:: ../../../examples/multi_process/simple_mp/main.c
     :language: c
@@ -121,23 +117,20 @@ and the secondary process then uses lookup functions to attach to these objects
 Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.

-Once the rings and memory pools are all available in both the primary and secondary processes,
-the application simply dedicates two threads to sending and receiving messages respectively.
-The receive thread simply dequeues any messages on the receive ring, prints them,
-and frees the buffer space used by the messages back to the memory pool.
-The send thread makes use of the command-prompt library to interactively request user input for messages to send.
-Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
-then enqueued on the appropriate rte_ring.
+The application has two threads:
+
+sender
+    Reads lines from stdin, converts them to messages, and enqueues them to the ring.
+
+receiver
+    Dequeues any messages on the ring, prints them, then frees the buffer.
+
 Symmetric Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
+The symmetric multi-process example demonstrates how a set of processes can run in parallel,
 with each process performing the same set of packet- processing operations.
-(Since each process is identical in functionality to the others,
-we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi- processing -
-such as a client-server mode of operation seen in the next example,
-where different processes perform different tasks, yet co-operate to form a packet-processing system.)
 The following diagram shows the data-flow through the application, using two processes.

 .. _figure_sym_multi_proc_app:

    Example Data Flow in a Symmetric Multi-process Application

-As the diagram shows, each process reads packets from each of the network ports in use.
-RSS is used to distribute incoming packets on each port to different hardware RX queues.
+Each process reads packets from each of the network ports in use.
+RSS distributes incoming packets on each port to different hardware RX queues.
 Each process reads a different RX queue on each port and so does not contend with any other process for that queue access.
-Similarly, each process writes outgoing packets to a different TX queue on each port.
+Each process writes outgoing packets to a different TX queue on each port.
 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^

-As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
-though with a number of other application- specific parameters also provided after the EAL arguments.
-These additional parameters are:
+The first instance of the symmetric_mp process is the primary instance, which takes these application-specific parameters after the EAL arguments:

-* -p , where portmask is a hexadecimal bitmask of what ports on the system are to be used. For example: -p 3 to use ports 0 and 1 only.
+* -p , where portmask is a hexadecimal bitmask of the ports on the system to be used. For example: -p 3 to use ports 0 and 1 only.
-* --num-procs , where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
+* --num-procs , where N is the total number of symmetric_mp instances run side-by-side to perform packet processing.
 This parameter is used to configure the appropriate number of receive queues on each network port.
 * --proc-id , where n is a numeric value in the range 0 <= n < N (number of processes, specified above).
 This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.

-The secondary symmetric_mp instances must also have these parameters specified,
-and the first two must be the same as those passed to the primary instance, or errors result.
-
-For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
-all performing level-2 forwarding of packets between ports 0 and 1,
-the following commands can be used (assuming run as root):
+The secondary instances must be started with the same -p and --num-procs parameters as the primary instance.
+Example:

 .. code-block:: console

@@ -184,31 +171,13 @@ the following commands can be used (assuming run as root):

 .. note::

-    In the above example, the process type can be explicitly specified as primary or secondary, rather than auto.
-    When using auto, the first process run creates all the memory structures needed for all processes -
-    irrespective of whether it has a proc-id of 0, 1, 2 or 3.
+    In the above example, auto is used so that the first instance started becomes the primary process.

-.. note::
-
-    For the symmetric multi-process example, since all processes work in the same manner,
-    once the hugepage shared memory and the network ports are initialized,
-    it is not necessary to restart all processes if the primary instance dies.
-    Instead, that process can be restarted as a secondary,
-    by explicitly setting the proc-type to secondary on the command line.
-    (All subsequent instances launched will also need this explicitly specified,
-    as auto-detection will detect no primary processes running and therefore attempt to re-initialize shared memory.)

 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^

-The initialization calls in both the primary and secondary instances are the same for the most part,
-calling the rte_eal_init(), 1 G and 10 G driver initialization and then probing devices.
-Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.
-In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
-the number of RX and TX queues per port being determined by the num-procs parameter passed on the command-line.
-The structures for the initialized network ports are stored in shared memory and
-therefore will be accessible by the secondary process as it initializes.
+The primary instance creates the memory pool and initializes the network ports.

 .. literalinclude:: ../../../examples/multi_process/symmetric_mp/main.c
     :language: c
     :end-before: >8 End of primary instance initialization.
     :dedent: 1

-In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
-giving the secondary process access to the hardware and software rings for each network port.
-Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:
+The secondary instance uses the port information exported by the primary process.
+The memory pool of mbufs is accessed by looking it up by name:

 .. code-block:: c

-    mp = (proc_type == RTE_PROC_SECONDARY) ? rte_mempool_lookup(_SMP_MBUF_POOL) : rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... )
+    if (proc_type == RTE_PROC_SECONDARY)
+        mp = rte_mempool_lookup(_SMP_MBUF_POOL);
+    else
+        mp = rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... );

-Once this initialization is complete, the main loop of each process, both primary and secondary,
-is exactly the same - each process reads from each port using the queue corresponding to its proc-id parameter,
+The main loop of each process, both primary and secondary, is the same.
+Each process reads from each port using the queue corresponding to its proc-id parameter,
 and writes to the corresponding transmit queue on the output port.

 Client-Server Multi-process Example
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The third example multi-process application included with the DPDK shows how one can
-use a client-server type multi-process design to do packet processing.
-In this example, a single server process performs the packet reception from the ports being used and
-distributes these packets using round-robin ordering among a set of client processes,
-which perform the actual packet processing.
-In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.
+This example demonstrates a client-server type multi-process design.
+A single server process receives packets from the ports and distributes them using round-robin
+ordering to the client processes.
+Each client process performs level-2 forwarding by sending each packet out on a different network port.

 The following diagram shows the data-flow through the application, using two client processes.

 .. _figure_client_svr_sym_multi_proc_app:

    Example Data Flow in a Client-Server Symmetric Multi-process Application

 Running the Application
 ^^^^^^^^^^^^^^^^^^^^^^^

-The server process must be run initially as the primary process to set up all memory structures for use by the clients.
+The server process must be run as the primary process to set up all memory structures.
 In addition to the EAL parameters, the application- specific parameters are:

 * -p , where portmask is a hexadecimal bitmask of what ports on the system are to be used.
 * -n , where the num-clients parameter is the number of client processes that will process the packets received by the server application.

 .. note::

-    In the server process, a single thread, the main thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O.
-    If a coremask/corelist is specified with more than a single lcore bit set in it,
-    an additional lcore will be used for a thread to periodically print packet count statistics.
+    In the server process, a single thread using the lowest numbered lcore in the coremask/corelist performs all packet I/O.
+    If the coremask/corelist parameter specifies more than a single lcore,
+    an additional lcore will be used for a thread that periodically prints packet count statistics.

-Since the server application stores configuration data in shared memory, including the network ports to be used,
-the only application parameter needed by a client process is its client instance ID.
-Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
-the following commands could be used:
+The server application stores configuration data in shared memory, including the network ports used.
+The only application parameter needed by a client process is its client instance ID.
+To run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
+the commands are:

 .. code-block:: console

@@ -285,27 +254,12 @@ the following commands could be used:

 How the Application Works
 ^^^^^^^^^^^^^^^^^^^^^^^^^

-The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
-One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
-This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
-as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
-
-In the same way that the server process is designed to be run as a primary process instance only,
-the client processes are designed to be run as secondary instances only.
-They have no code to attempt to create shared memory objects.
-Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
-The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
-which will, as in the symmetric multi-process example,
-automatically get access to the network ports using the settings already configured by the primary/server process.
-
-Once all applications are initialized, the server operates by reading packets from each network port in turn and
-distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
-On the client side, the packets are read from the rings in as big of bursts as possible, then routed out to a different network port.
-The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
-Similarly, packets are routed between the 3rd and 4th network ports and so on.
-The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
+The server (primary) process performs the network port and data structure initialization and
+stores its port configuration data in a memory zone in hugepage shared memory.
+The client process therefore does not need the portmask parameter passed in on the command line.
+The server process is the primary process, and the client processes are secondary processes.

-In both the server and the client processes, outgoing packets are buffered before being sent,
-so as to allow the sending of multiple packets in a single burst to improve efficiency.
-For example, the client process will buffer packets to send,
-until either the buffer is full or until we receive no further packets from the server.
+The server operates by reading packets from each network port and distributing those packets to the client queues.
+The client reads packets from its ring and routes them to a different network port.
+The routing used is very simple: all packets received on the first NIC port are transmitted back out on the second port and vice versa.
+Similarly, packets are routed between the 3rd and 4th network ports and so on.
-- 
2.45.2
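
The create-in-primary / look-up-in-secondary pattern that the revised text describes for the simple_mp and client-server examples can be sketched roughly as follows. This is an illustrative sketch only: the object names, sizes, and error handling are assumptions and are not taken from the sample code.

.. code-block:: c

    /* Illustrative sketch of the create-in-primary / look-up-in-secondary
     * pattern described above.  Object names and sizes are hypothetical,
     * not taken from the sample code. */
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>

    #define EXAMPLE_RING "EXAMPLE_RING"      /* hypothetical shared-object names */
    #define EXAMPLE_POOL "EXAMPLE_MSG_POOL"

    static struct rte_ring *ring;
    static struct rte_mempool *pool;

    static int
    shared_objects_init(void)
    {
        if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
            /* The primary process reserves the shared memory and creates the objects. */
            ring = rte_ring_create(EXAMPLE_RING, 64, rte_socket_id(),
                                   RING_F_SP_ENQ | RING_F_SC_DEQ);
            pool = rte_mempool_create(EXAMPLE_POOL, 1023, 64, 32, 0,
                                      NULL, NULL, NULL, NULL,
                                      rte_socket_id(), 0);
        } else {
            /* Secondary processes attach to the objects by name. */
            ring = rte_ring_lookup(EXAMPLE_RING);
            pool = rte_mempool_lookup(EXAMPLE_POOL);
        }
        return (ring != NULL && pool != NULL) ? 0 : -1;
    }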
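Likewise, the per-process queue usage described for the symmetric_mp example, where each process polls the RX queue matching its proc-id on every port and transmits on the TX queue with the same index, could look roughly like the sketch below; the port pairing, burst size, and names are assumptions for illustration only.

.. code-block:: c

    /* Illustrative sketch of the symmetric_mp forwarding loop described
     * above: each process owns RX/TX queue 'proc_id' on every port, so
     * processes never contend for a queue.  Names and sizes are hypothetical. */
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static void
    forwarding_loop(uint16_t nb_ports, uint16_t proc_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            for (uint16_t port = 0; port < nb_ports; port++) {
                /* Read from this process's own RX queue on the port. */
                uint16_t nb_rx = rte_eth_rx_burst(port, proc_id, bufs, BURST_SIZE);
                if (nb_rx == 0)
                    continue;

                /* Level-2 forwarding between port pairs: 0<->1, 2<->3, ... */
                uint16_t dst_port = port ^ 1;
                uint16_t nb_tx = rte_eth_tx_burst(dst_port, proc_id, bufs, nb_rx);

                /* Drop anything the TX queue did not accept. */
                while (nb_tx < nb_rx)
                    rte_pktmbuf_free(bufs[nb_tx++]);
            }
        }
    }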