DPDK patches and discussions
* RE: ask for TXA_FLUSH_THRESHOLD change
       [not found] <DB7PR07MB4489DB019CC14EBE5D4AC472F3AB9@DB7PR07MB4489.eurprd07.prod.outlook.com>
@ 2022-06-16  6:39 ` Jayatheerthan, Jay
  2022-06-22  2:13   ` Jaeeun Ham
  0 siblings, 1 reply; 4+ messages in thread
From: Jayatheerthan, Jay @ 2022-06-16  6:39 UTC (permalink / raw)
  To: Jaeeun Ham, dev; +Cc: Jerin Jacob


Hi Jaeeun,
See my responses inline below.

-Jay


From: Jaeeun Ham <jaeeun.ham@ericsson.com>
Sent: Monday, June 13, 2022 5:51 AM
To: dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Subject: ask for TXA_FLUSH_THRESHOLD change

Hi,

There was a latency delay when I increased the number of DPDK (20.11.1) worker cores (one worker core was okay).
When I decreased the TXA_FLUSH_THRESHOLD value (from 1024 to 32), the latency was okay.

TXA_FLUSH_THRESHOLD is defined in lib/librte_eventdev/rte_event_eth_tx_adapter.c (https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15).
When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps).
I think this causes rte_eth_tx_buffer_flush() to be called more frequently.
However, I'm not sure whether this approach could degrade performance.
Do you have any opinion about this?

[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can reduce the batching benefit. The typical performance vs. latency trade-off applies here.
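
For illustration, here is a minimal standalone sketch of the flush-threshold idea built on rte_eth_tx_buffer()/rte_eth_tx_buffer_flush(); it is not the adapter's actual internals, and FLUSH_THOLD, tx_with_threshold and pending are placeholder names:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define FLUSH_THOLD 32  /* smaller value => more frequent flushes, lower latency, smaller Tx bursts */

/* Buffer a packet for transmission and flush once FLUSH_THOLD packets
 * have been buffered since the last flush. */
static void
tx_with_threshold(uint16_t port, uint16_t queue,
                  struct rte_eth_dev_tx_buffer *txb,
                  struct rte_mbuf *pkt, unsigned int *pending)
{
        /* rte_eth_tx_buffer() queues the packet and transmits automatically
         * when the buffer it was initialized with becomes full. */
        rte_eth_tx_buffer(port, queue, txb, pkt);

        if (++(*pending) >= FLUSH_THOLD) {
                /* Send out whatever is still buffered. Calling this more
                 * often reduces latency but shrinks the average burst. */
                rte_eth_tx_buffer_flush(port, queue, txb);
                *pending = 0;
        }
}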


A similar RDK setting, RTE_BRIDGE_ETH_TX_FLUSH_THOLD, has been patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2:
I'm not aware of any side effects; I think it is needed to achieve low enough latency even at low traffic rates. For more details, see Intel FP 22288 <https://footprints.intel.com/MRcgi/MRlogin.pl?DL=22288DA14>.

[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attribute.

TXA_MAX_NB_TX (128) looks like the same attribute as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384); should it also be tuned?

[Jay] They are two different attributes. TXA_MAX_NB_TX refers to the maximum number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in the Rx BD.


--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig  2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base   2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32

--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
#define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
#define RTE_MAX_BRIDGE_ETH_INSTANCE 4
#define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
#undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT


BR/Jaeeun




* RE: ask for TXA_FLUSH_THRESHOLD change
  2022-06-16  6:39 ` ask for TXA_FLUSH_THRESHOLD change Jayatheerthan, Jay
@ 2022-06-22  2:13   ` Jaeeun Ham
  2022-06-25  3:19     ` Jaeeun Ham
  0 siblings, 1 reply; 4+ messages in thread
From: Jaeeun Ham @ 2022-06-22  2:13 UTC (permalink / raw)
  To: Jayatheerthan, Jay, dev; +Cc: Jerin Jacob


Hi,

Could you guide me on how to eliminate or reduce the tx drops/retries?
Lowering TXA_FLUSH_THRESHOLD helped somewhat, but it did not clear the packet tx drops.

========[ TX adapter stats ]========
tx_retry: 17499893
tx_packets: 7501716
tx_dropped: 5132458
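
For reference, these counters match the fields of struct rte_event_eth_tx_adapter_stats; a minimal sketch of reading them (the adapter id below is a placeholder):

#include <inttypes.h>
#include <stdio.h>
#include <rte_event_eth_tx_adapter.h>

/* Print the Tx adapter counters shown above. */
static void
dump_txa_stats(uint8_t txa_id)
{
        struct rte_event_eth_tx_adapter_stats stats;

        if (rte_event_eth_tx_adapter_stats_get(txa_id, &stats) == 0) {
                printf("tx_retry: %" PRIu64 "\n", stats.tx_retry);
                printf("tx_packets: %" PRIu64 "\n", stats.tx_packets);
                printf("tx_dropped: %" PRIu64 "\n", stats.tx_dropped);
        }
}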

BR/Jaeeun

From: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Sent: Thursday, June 16, 2022 3:40 PM
To: Jaeeun Ham <jaeeun.ham@ericsson.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi Jaeeun,
See my responses inline below.

-Jay


From: Jaeeun Ham <jaeeun.ham@ericsson.com>
Sent: Monday, June 13, 2022 5:51 AM
To: dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Subject: ask for TXA_FLUSH_THRESHOLD change

Hi,

There was a latency delay when I increased the number of DPDK (20.11.1) worker cores (one worker core was okay).
When I decreased the TXA_FLUSH_THRESHOLD value (from 1024 to 32), the latency was okay.

TXA_FLUSH_THRESHOLD is defined in lib/librte_eventdev/rte_event_eth_tx_adapter.c (https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15).
When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps).
I think this causes rte_eth_tx_buffer_flush() to be called more frequently.
However, I'm not sure whether this approach could degrade performance.
Do you have any opinion about this?

[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can reduce the batching benefit. The typical performance vs. latency trade-off applies here.


A similar RDK setting, RTE_BRIDGE_ETH_TX_FLUSH_THOLD, has been patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2:
I'm not aware of any side effects; I think it is needed to achieve low enough latency even at low traffic rates. For more details, see Intel FP 22288 <https://footprints.intel.com/MRcgi/MRlogin.pl?DL=22288DA14>.

[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attribute.

TXA_MAX_NB_TX (128) looks like the same attribute as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384); should it also be tuned?

[Jay] They are two different attributes. TXA_MAX_NB_TX refers to the maximum number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in the Rx BD.


--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig  2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base   2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32

--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
#define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
#define RTE_MAX_BRIDGE_ETH_INSTANCE 4
#define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
#undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT


BR/Jaeeun




* RE: ask for TXA_FLUSH_THRESHOLD change
  2022-06-22  2:13   ` Jaeeun Ham
@ 2022-06-25  3:19     ` Jaeeun Ham
  2022-06-27  6:23       ` Jayatheerthan, Jay
  0 siblings, 1 reply; 4+ messages in thread
From: Jaeeun Ham @ 2022-06-25  3:19 UTC (permalink / raw)
  To: Jayatheerthan, Jay, dev; +Cc: Jerin Jacob


Hi,

Packets seem to be silently discarded at the Rx NIC while waiting in the buffer, and this does not happen when I use only one worker core.
I think packet processing with multiple worker cores runs into this problem when handling heavy traffic in parallel.

I want to increase the buffer size to solve this, as shown below; is it right to just change the multiplier from 4 to 8?
I am trying to gather DPDK and PMD driver logs with options such as --log-level=7 --log-level=pmd.net,7. Could you recommend how to set the log-level string to confirm the silent packet drops?

dpdk-stable-20.11.1/lib/librte_eventdev/rte_event_eth_rx_adapter.c

#define BATCH_SIZE      32
#define BLOCK_CNT_THRESHOLD 10
#define ETH_EVENT_BUFFER_SIZE   (4*BATCH_SIZE)  // e.g. (8*BATCH_SIZE)

#define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
#define ETH_RX_ADAPTER_MEM_NAME_LEN 32

#define RSS_KEY_SIZE    40
/* value written to intr thread pipe to signal thread exit */
#define ETH_BRIDGE_INTR_THREAD_EXIT 1
/* Sentinel value to detect initialized file handle */
#define INIT_FD     -1

BR/Jaeeun

From: Jaeeun Ham
Sent: Wednesday, June 22, 2022 11:14 AM
To: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi,

Could you guide me on how to eliminate or reduce the tx drops/retries?
Lowering TXA_FLUSH_THRESHOLD helped somewhat, but it did not clear the packet tx drops.

========[ TX adapter stats ]========
tx_retry: 17499893
tx_packets: 7501716
tx_dropped: 5132458

BR/Jaeeun

From: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Sent: Thursday, June 16, 2022 3:40 PM
To: Jaeeun Ham <jaeeun.ham@ericsson.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi Jaeeun,
See my responses inline below.

-Jay


From: Jaeeun Ham <jaeeun.ham@ericsson.com>
Sent: Monday, June 13, 2022 5:51 AM
To: dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Subject: ask for TXA_FLUSH_THRESHOLD change

Hi,

There was a latency delay when I increased the number of DPDK (20.11.1) worker cores (one worker core was okay).
When I decreased the TXA_FLUSH_THRESHOLD value (from 1024 to 32), the latency was okay.

TXA_FLUSH_THRESHOLD is defined in lib/librte_eventdev/rte_event_eth_tx_adapter.c (https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15).
When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps).
I think this causes rte_eth_tx_buffer_flush() to be called more frequently.
However, I'm not sure whether this approach could degrade performance.
Do you have any opinion about this?

[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can reduce the batching benefit. The typical performance vs. latency trade-off applies here.


A similar RDK setting, RTE_BRIDGE_ETH_TX_FLUSH_THOLD, has been patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2:
I'm not aware of any side effects; I think it is needed to achieve low enough latency even at low traffic rates. For more details, see Intel FP 22288 <https://footprints.intel.com/MRcgi/MRlogin.pl?DL=22288DA14>.

[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attribute.

TXA_MAX_NB_TX (128) looks like the same attribute as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384); should it also be tuned?

[Jay] They are two different attributes. TXA_MAX_NB_TX refers to the maximum number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in the Rx BD.


--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig  2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base   2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32

--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
#define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
#define RTE_MAX_BRIDGE_ETH_INSTANCE 4
#define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
#undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT


BR/Jaeeun




* RE: ask for TXA_FLUSH_THRESHOLD change
  2022-06-25  3:19     ` Jaeeun Ham
@ 2022-06-27  6:23       ` Jayatheerthan, Jay
  0 siblings, 0 replies; 4+ messages in thread
From: Jayatheerthan, Jay @ 2022-06-27  6:23 UTC (permalink / raw)
  To: Jaeeun Ham, dev; +Cc: Jerin Jacob


Hi,
Since DPDK 21.11, the Rx adapter event buffer size is configurable through the API at creation time. Please refer to the rte_event_eth_rx_adapter_create_with_params() API in the link below:
https://doc.dpdk.org/api/rte__event__eth__rx__adapter_8h.html
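
A minimal sketch of using it (the adapter/device ids, port configuration and buffer size below are placeholders):

#include <rte_event_eth_rx_adapter.h>

/* DPDK >= 21.11: create the Rx adapter with a larger event buffer instead
 * of patching ETH_EVENT_BUFFER_SIZE in the library source. */
static int
create_rx_adapter(uint8_t rxa_id, uint8_t evdev_id,
                  struct rte_event_port_conf *port_conf)
{
        struct rte_event_eth_rx_adapter_params params = {
                .event_buf_size = 8 * 32,  /* e.g. 8 * BATCH_SIZE instead of the default 4 * BATCH_SIZE */
        };

        return rte_event_eth_rx_adapter_create_with_params(rxa_id, evdev_id,
                                                            port_conf, &params);
}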

You could refer to the EAL documentation for how to use the log-level option, depending on the PMD you are using.
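
For illustration only (the log type names below are examples and depend on the libraries and PMD in use; level 8 corresponds to debug), the EAL options could look like:

  ./your-app -l 0-10 -n 4 --log-level=lib.eventdev,8 --log-level=pmd.net,8 -- <application args>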

-Jay



From: Jaeeun Ham <jaeeun.ham@ericsson.com>
Sent: Saturday, June 25, 2022 8:49 AM
To: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi,

Packets seem to be silently discarded at the Rx NIC while waiting in the buffer, and this does not happen when I use only one worker core.
I think packet processing with multiple worker cores runs into this problem when handling heavy traffic in parallel.

I want to increase the buffer size to solve this, as shown below; is it right to just change the multiplier from 4 to 8?
I am trying to gather DPDK and PMD driver logs with options such as --log-level=7 --log-level=pmd.net,7. Could you recommend how to set the log-level string to confirm the silent packet drops?

dpdk-stable-20.11.1/lib/librte_eventdev/rte_event_eth_rx_adapter.c

#define BATCH_SIZE      32
#define BLOCK_CNT_THRESHOLD 10
#define ETH_EVENT_BUFFER_SIZE   (4*BATCH_SIZE)  // e.g. (8*BATCH_SIZE)

#define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
#define ETH_RX_ADAPTER_MEM_NAME_LEN 32

#define RSS_KEY_SIZE    40
/* value written to intr thread pipe to signal thread exit */
#define ETH_BRIDGE_INTR_THREAD_EXIT 1
/* Sentinel value to detect initialized file handle */
#define INIT_FD     -1

BR/Jaeeun

From: Jaeeun Ham
Sent: Wednesday, June 22, 2022 11:14 AM
To: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi,

Could you guide me on how to eliminate or reduce the tx drops/retries?
Lowering TXA_FLUSH_THRESHOLD helped somewhat, but it did not clear the packet tx drops.

========[ TX adapter stats ]========
tx_retry: 17499893
tx_packets: 7501716
tx_dropped: 5132458

BR/Jaeeun

From: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Sent: Thursday, June 16, 2022 3:40 PM
To: Jaeeun Ham <jaeeun.ham@ericsson.com>; dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi Jaeeun,
See my responses inline below.

-Jay


From: Jaeeun Ham <jaeeun.ham@ericsson.com>
Sent: Monday, June 13, 2022 5:51 AM
To: dev@dpdk.org
Cc: Jerin Jacob <jerinj@marvell.com>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Subject: ask for TXA_FLUSH_THRESHOLD change

Hi,

There was a latency delay when I increased the number of DPDK (20.11.1) worker cores (one worker core was okay).
When I decreased the TXA_FLUSH_THRESHOLD value (from 1024 to 32), the latency was okay.

TXA_FLUSH_THRESHOLD is defined in lib/librte_eventdev/rte_event_eth_tx_adapter.c (https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15).
When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps).
I think this causes rte_eth_tx_buffer_flush() to be called more frequently.
However, I'm not sure whether this approach could degrade performance.
Do you have any opinion about this?

[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can reduce the batching benefit. The typical performance vs. latency trade-off applies here.


A similar RDK setting, RTE_BRIDGE_ETH_TX_FLUSH_THOLD, has been patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2:
I'm not aware of any side effects; I think it is needed to achieve low enough latency even at low traffic rates. For more details, see Intel FP 22288 <https://footprints.intel.com/MRcgi/MRlogin.pl?DL=22288DA14>.

[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attribute.

TXA_MAX_NB_TX (128) looks like the same attribute as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384); should it also be tuned?

[Jay] They are two different attributes. TXA_MAX_NB_TX refers to the maximum number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in the Rx BD.


--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig  2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base   2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32

--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h   2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
#define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
#define RTE_MAX_BRIDGE_ETH_INSTANCE 4
#define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
#undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT


BR/Jaeeun




end of thread

Thread overview: 4+ messages
     [not found] <DB7PR07MB4489DB019CC14EBE5D4AC472F3AB9@DB7PR07MB4489.eurprd07.prod.outlook.com>
2022-06-16  6:39 ` ask for TXA_FLUSH_THRESHOLD change Jayatheerthan, Jay
2022-06-22  2:13   ` Jaeeun Ham
2022-06-25  3:19     ` Jaeeun Ham
2022-06-27  6:23       ` Jayatheerthan, Jay
