Hi,

From DPDK 21.11, the Rx adapter event buffer size is configurable through an API at creation time. Please refer to the rte_event_eth_rx_adapter_create_with_params() API in the link below:
https://doc.dpdk.org/api/rte__event__eth__rx__adapter_8h.html

You could refer to the EAL docs on how to use the log-level option based on the PMD you are using.

-Jay

From: Jaeeun Ham
Sent: Saturday, June 25, 2022 8:49 AM
To: Jayatheerthan, Jay; dev@dpdk.org
Cc: Jerin Jacob
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi,

Packets seem to be silently discarded at the Rx NIC while waiting in the buffer, and this does not happen when I use only one worker core. I think packet processing with multiple worker cores runs into this problem when handling heavy traffic in parallel. I want to increase the buffer size to solve this, as below. Is it right to just change the multiplier from 4 to 8?

I am trying to gather DPDK and PMD driver logs, e.g. --log-level="7" --log-level="pmd.net,7". Could you recommend how to set the log-level string to confirm the silent packet drop?

dpdk-stable-20.11.1/lib/librte_eventdev/rte_event_eth_rx_adapter.c:

#define BATCH_SIZE 32
#define BLOCK_CNT_THRESHOLD 10
#define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE) // e.g. (8*BATCH_SIZE)
#define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
#define ETH_RX_ADAPTER_MEM_NAME_LEN 32
#define RSS_KEY_SIZE 40
/* value written to intr thread pipe to signal thread exit */
#define ETH_BRIDGE_INTR_THREAD_EXIT 1
/* Sentinel value to detect initialized file handle */
#define INIT_FD -1

BR/Jaeeun

From: Jaeeun Ham
Sent: Wednesday, June 22, 2022 11:14 AM
To: Jayatheerthan, Jay; dev@dpdk.org
Cc: Jerin Jacob
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi,

Could you guide me on how to eliminate or reduce Tx drops/retries? TXA_FLUSH_THRESHOLD helped somewhat, but it did not clear the packet Tx drops.
========[ TX adapter stats ]========
tx_retry:   17499893
tx_packets:  7501716
tx_dropped:  5132458

BR/Jaeeun

From: Jayatheerthan, Jay
Sent: Thursday, June 16, 2022 3:40 PM
To: Jaeeun Ham; dev@dpdk.org
Cc: Jerin Jacob
Subject: RE: ask for TXA_FLUSH_THRESHOLD change

Hi Jaeeun,

See my responses inline below.

-Jay

From: Jaeeun Ham
Sent: Monday, June 13, 2022 5:51 AM
To: dev@dpdk.org
Cc: Jerin Jacob; Jayatheerthan, Jay
Subject: ask for TXA_FLUSH_THRESHOLD change

Hi,

There was a latency delay when I increased the number of DPDK (20.11.1) worker cores. (One worker core was okay.) When I decreased the TXA_FLUSH_THRESHOLD value (1024 to 32), it was okay. It's TXA_FLUSH_THRESHOLD in lib/librte_eventdev/rte_event_eth_tx_adapter.c:
https://git.dpdk.org/dpdk-stable/tree/lib/librte_eventdev/rte_event_eth_tx_adapter.c?h=20.11#n15

When the TXA_FLUSH_THRESHOLD value was changed from 1024 to 32, the latency test result was fine on 10 cores for low traffic (DL: 20 Mbps / UL: 17 kbps). I think this makes rte_eth_tx_buffer_flush() get called more frequently, but I'm not sure whether this approach can cause worse performance or not. Do you have any opinion about this?

[Jay] Yes, it will cause rte_eth_tx_buffer_flush() to be called more often. It can lead to less batching benefit. The typical performance vs. latency trade-off decision applies here.

Similar RDK RTE_BRIDGE_ETH_TX_FLUSH_THOLD is patched on DUSG3 from 1024 to a smaller value since DPDK 18.11.2. I'm not aware of any side effect; I think it is needed to have low enough latency even at low traffic rates. For more details see Intel FP 22288.

[Jay] Currently, TXA_FLUSH_THRESHOLD is not a configurable attribute.

TXA_MAX_NB_TX (128) looks the same as CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE (16384); should it also be tuned?

[Jay] They are different attributes. TXA_MAX_NB_TX refers to the max number of queues in the Tx adapter. CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE refers to the event buffer size in Rx BD.
--- dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base.orig	2020-01-29 15:05:10.000000000 +0100
+++ dpdk-3pp-swu-18.11/dpdk-stable-18.11.2/config/common_base	2020-01-29 15:11:10.000000000 +0100
@@ -566,9 +566,9 @@
 CONFIG_RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES=100
 CONFIG_RTE_MAX_BRIDGE_ETH_INSTANCE=4
 CONFIG_RTE_BRIDGE_ETH_INTR_RING_SIZE=32
-CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=128
+CONFIG_RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE=16384
 CONFIG_RTE_LIBRTE_BRIDGE_ETH_DEBUG=n
-CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=1024
+CONFIG_RTE_BRIDGE_ETH_TX_FLUSH_THOLD=32

--- dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h	2021-08-05 23:46:52.051051000 +0200
+++ dpdk-3pp-swu-dusg3-20.11.3/dpdk-stable-20.11.3/config/rte_config.h	2021-08-06 00:50:07.310766255 +0200
@@ -175,8 +175,8 @@
 #define RTE_LIBRTE_BRIDGE_ETH_MAX_CP_ENQ_RETRIES 100
 #define RTE_MAX_BRIDGE_ETH_INSTANCE 4
 #define RTE_BRIDGE_ETH_INTR_RING_SIZE 32
-#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 128
-#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 1024
+#define RTE_BRIDGE_ETH_EVENT_BUFFER_SIZE 16384
+#define RTE_BRIDGE_ETH_TX_FLUSH_THOLD 10
 #undef RTE_BRIDGE_ETH_TX_MULTI_PKT_EVENT

BR/Jaeeun
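[Editor's note] The creation-time API Jay mentions at the top of this thread, rte_event_eth_rx_adapter_create_with_params() (available since DPDK 21.11), replaces the fixed ETH_EVENT_BUFFER_SIZE macro patching discussed above. A minimal sketch follows; it needs DPDK >= 21.11 headers and libraries to build, and the adapter/device IDs and port_conf numbers are placeholders, not recommendations:

```c
#include <rte_event_eth_rx_adapter.h>

int create_rx_adapter_with_bigger_buffer(void)
{
	uint8_t rxa_id = 0;   /* placeholder Rx adapter instance id */
	uint8_t evdev_id = 0; /* placeholder event device id */

	struct rte_event_port_conf port_conf = {
		.new_event_threshold = 4096,
		.dequeue_depth = 32,
		.enqueue_depth = 32,
	};

	/* Event buffer size as a creation-time parameter, instead of
	 * rebuilding DPDK with ETH_EVENT_BUFFER_SIZE = (8 * BATCH_SIZE). */
	struct rte_event_eth_rx_adapter_params rxa_params = {
		.event_buf_size = 8 * 32, /* i.e. 8 * BATCH_SIZE events */
		.use_queue_event_buf = false,
	};

	return rte_event_eth_rx_adapter_create_with_params(rxa_id, evdev_id,
							    &port_conf,
							    &rxa_params);
}
```

On 20.11-based trees the API is not available, so patching the macro (or the bridge config above) remains the only way to grow the buffer there.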