patches for DPDK stable branches
* [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution
       [not found] <20211026095037.17557-1-getelson@nvidia.com>
@ 2021-11-09  9:58 ` Gregory Etelson
  2021-11-09 11:35   ` Thomas Monjalon
  2021-11-10 16:52 ` [PATCH v3] " Gregory Etelson
  2021-11-10 16:57 ` [PATCH v4] examples/multi_process: " Gregory Etelson
  2 siblings, 1 reply; 7+ messages in thread
From: Gregory Etelson @ 2021-11-09  9:58 UTC (permalink / raw)
  To: dev, getelson; +Cc: matan, rasland, thomas, stable, Anatoly Burakov

The MP server distributes Rx packets between clients according to
a round-robin scheme.

The current implementation always restarted the distribution from
the first client. That produced a roughly uniform distribution as
long as the number of Rx packets per burst was close to a multiple
of the number of clients. However, if an Rx burst repeatedly
returned a single packet, the round-robin scheme did not work:
all packets were assigned to the first client only.

With this patch, packet distribution is no longer restarted from
the first client; it always continues with the next client.

Cc: stable@dpdk.org

Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
v2: Remove explicit static variable initialization.
---
 examples/multi_process/client_server_mp/mp_server/main.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..31e7e76706 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,12 @@ process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	/*
+	 * C99: All objects with static storage duration
+	 * shall be initialized (set to their initial values) before
+	 * program startup.
+	 */
+	static uint8_t client;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);
-- 
2.33.1



* Re: [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution
  2021-11-09  9:58 ` [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution Gregory Etelson
@ 2021-11-09 11:35   ` Thomas Monjalon
  2021-11-09 11:49     ` Gregory Etelson
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Monjalon @ 2021-11-09 11:35 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, matan, rasland, stable, Anatoly Burakov, david.marchand

09/11/2021 10:58, Gregory Etelson:
> -	uint8_t client = 0;
> +	/*
> +	 * C99: All objects with static storage duration
> +	 * shall be initialized (set to their initial values) before
> +	 * program startup.
> +	 */

Why adding this comment?

> +	static uint8_t client;





* Re: [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution
  2021-11-09 11:35   ` Thomas Monjalon
@ 2021-11-09 11:49     ` Gregory Etelson
  2021-11-09 14:17       ` Thomas Monjalon
  0 siblings, 1 reply; 7+ messages in thread
From: Gregory Etelson @ 2021-11-09 11:49 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon
  Cc: dev, Matan Azrad, Raslan Darawsheh, stable, Anatoly Burakov,
	david.marchand

Hello Thomas,

> 09/11/2021 10:58, Gregory Etelson:
> > -     uint8_t client = 0;
> > +     /*
> > +      * C99: All objects with static storage duration
> > +      * shall be initialized (set to their initial values) before
> > +      * program startup.
> > +      */
> 
> Why adding this comment?
> 
> > +     static uint8_t client;
> 
> 

The C99 guarantee used here is not obvious.
The patch relies on client being zero-initialized.
I added the comment to clarify why client has no explicit initializer.




* Re: [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution
  2021-11-09 11:49     ` Gregory Etelson
@ 2021-11-09 14:17       ` Thomas Monjalon
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2021-11-09 14:17 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, Matan Azrad, Raslan Darawsheh, stable, Anatoly Burakov,
	david.marchand

09/11/2021 12:49, Gregory Etelson:
> Hello Thomas,
> 
> > 09/11/2021 10:58, Gregory Etelson:
> > > -     uint8_t client = 0;
> > > +     /*
> > > +      * C99: All objects with static storage duration
> > > +      * shall be initialized (set to their initial values) before
> > > +      * program startup.
> > > +      */
> > 
> > Why adding this comment?
> > 
> > > +     static uint8_t client;
> > 
> > 
> 
> C99 optimization that was used here is not obvious.
> The patch relies on client=0 initialization.
> I added the comment to clarify why the client was not initialized.

I think it is the C standard in general, not only C99.
As far as I know, static objects starting at 0 is obvious to a lot of people.




* [PATCH v3] examples/multi_proces: fix Rx packets distribution
       [not found] <20211026095037.17557-1-getelson@nvidia.com>
  2021-11-09  9:58 ` [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution Gregory Etelson
@ 2021-11-10 16:52 ` Gregory Etelson
  2021-11-10 16:57 ` [PATCH v4] examples/multi_process: " Gregory Etelson
  2 siblings, 0 replies; 7+ messages in thread
From: Gregory Etelson @ 2021-11-10 16:52 UTC (permalink / raw)
  To: dev, getelson; +Cc: matan, rasland, thomas, stable, Anatoly Burakov

The MP server distributes Rx packets between clients according to
a round-robin scheme.

The current implementation always restarted the distribution from
the first client. That produced a roughly uniform distribution as
long as the number of Rx packets per burst was close to a multiple
of the number of clients. However, if an Rx burst repeatedly
returned a single packet, the round-robin scheme did not work:
all packets were assigned to the first client only.

With this patch, packet distribution is no longer restarted from
the first client; it always continues with the next client.

Cc: stable@dpdk.org

Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
v2: Remove explicit static variable initialization.
v3: Remove comment.
---
 examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..f54bb8b75a 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	static uint8_t client;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);
-- 
2.33.1



* [PATCH v4] examples/multi_process: fix Rx packets distribution
       [not found] <20211026095037.17557-1-getelson@nvidia.com>
  2021-11-09  9:58 ` [dpdk-stable] [PATCH v2] examples/multi_proces: fix Rx packets distribution Gregory Etelson
  2021-11-10 16:52 ` [PATCH v3] " Gregory Etelson
@ 2021-11-10 16:57 ` Gregory Etelson
  2021-11-16 15:07   ` David Marchand
  2 siblings, 1 reply; 7+ messages in thread
From: Gregory Etelson @ 2021-11-10 16:57 UTC (permalink / raw)
  To: dev, getelson; +Cc: matan, rasland, thomas, stable, Anatoly Burakov

The MP server distributes Rx packets between clients according to
a round-robin scheme.

The current implementation always restarted the distribution from
the first client. That produced a roughly uniform distribution as
long as the number of Rx packets per burst was close to a multiple
of the number of clients. However, if an Rx burst repeatedly
returned a single packet, the round-robin scheme did not work:
all packets were assigned to the first client only.

With this patch, packet distribution is no longer restarted from
the first client; it always continues with the next client.

Cc: stable@dpdk.org

Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
v2: Remove explicit static variable initialization.
v3: Remove comment.
v4: Spell check.
---
 examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..f54bb8b75a 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	static uint8_t client;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);
-- 
2.33.1



* Re: [PATCH v4] examples/multi_process: fix Rx packets distribution
  2021-11-10 16:57 ` [PATCH v4] examples/multi_process: " Gregory Etelson
@ 2021-11-16 15:07   ` David Marchand
  0 siblings, 0 replies; 7+ messages in thread
From: David Marchand @ 2021-11-16 15:07 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, Matan Azrad, Raslan Darawsheh, Thomas Monjalon, dpdk stable,
	Anatoly Burakov

On Wed, Nov 10, 2021 at 5:58 PM Gregory Etelson <getelson@nvidia.com> wrote:
>
> The MP server distributes Rx packets between clients according to
> a round-robin scheme.
>
> The current implementation always restarted the distribution from
> the first client. That produced a roughly uniform distribution as
> long as the number of Rx packets per burst was close to a multiple
> of the number of clients. However, if an Rx burst repeatedly
> returned a single packet, the round-robin scheme did not work:
> all packets were assigned to the first client only.
>
> With this patch, packet distribution is no longer restarted from
> the first client; it always continues with the next client.
>
> Fixes: af75078fece3 ("first public release")
> Cc: stable@dpdk.org
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

Applied, thanks.


-- 
David Marchand



