* [PATCH 0/4] bugfix and enhance features for DMA example
@ 2022-04-11 2:56 Chengwen Feng
2022-04-11 2:56 ` [PATCH 1/4] examples/dma: fix MTU configuration Chengwen Feng
` (5 more replies)
0 siblings, 6 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 2:56 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
This patch set contains four bugfix and feature enhancement patches.
Chengwen Feng (3):
examples/dma: fix Tx drop statistic is not collected
examples/dma: support enqueue drop statistic
examples/dma: add minimal copy size parameter
Huisong Li (1):
examples/dma: fix MTU configuration
examples/dma/dmafwd.c | 111 +++++++++++++++++++++++++++++-------------
1 file changed, 77 insertions(+), 34 deletions(-)
--
2.33.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH 1/4] examples/dma: fix MTU configuration
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
@ 2022-04-11 2:56 ` Chengwen Feng
2022-04-11 2:56 ` [PATCH 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
` (4 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 2:56 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
From: Huisong Li <lihuisong@huawei.com>
The MTU in the dma app can be configured via the 'max_frame_size' parameter,
which has a default value (1518). It is not reasonable to use this value
directly as the MTU.
This patch fixes it.
Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
examples/dma/dmafwd.c | 43 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 39 insertions(+), 4 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 608487e35c..a03ca05129 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -117,7 +117,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
-static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
+static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -851,6 +851,38 @@ assign_rings(void)
}
/* >8 End of assigning ring structures for packet exchanging. */
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_frame_size == 0)
+ return 0;
+
+ if (max_frame_size < RTE_ETHER_MIN_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_frame_size - overhead_len;
+
+ return 0;
+}
+
/*
* Initializes a given port using global settings and with the RX buffers
* coming from the mbuf_pool passed as a parameter.
@@ -878,9 +910,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
struct rte_eth_dev_info dev_info;
int ret, i;
- if (max_frame_size > local_port_conf.rxmode.mtu)
- local_port_conf.rxmode.mtu = max_frame_size;
-
/* Skip ports that are not enabled */
if ((dma_enabled_port_mask & (1 << portid)) == 0) {
printf("Skipping disabled port %u\n", portid);
@@ -895,6 +924,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
rte_exit(EXIT_FAILURE, "Cannot get device info: %s, port=%u\n",
rte_strerror(-ret), portid);
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max frame size: %u (port %u)\n",
+ max_frame_size, portid);
+
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
--
2.33.0
* [PATCH 2/4] examples/dma: fix Tx drop statistic is not collected
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 2:56 ` [PATCH 1/4] examples/dma: fix MTU configuration Chengwen Feng
@ 2022-04-11 2:56 ` Chengwen Feng
2022-04-11 2:56 ` [PATCH 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
` (3 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 2:56 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
The Tx drop statistic was designed to be collected by the
rte_eth_dev_tx_buffer mechanism, but the application uses
rte_eth_tx_burst to send packets, so the Tx drop statistic
was never collected.
This patch removes the rte_eth_dev_tx_buffer mechanism to fix the problem.
Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 27 +++++----------------------
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index a03ca05129..dd576bcf77 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -122,7 +122,6 @@ static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
-static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
struct rte_mempool *dma_pktmbuf_pool;
/* Print out statistics for one port. */
@@ -484,10 +483,13 @@ dma_tx_port(struct rxtx_port_config *tx_config)
port_statistics.tx[tx_config->rxtx_port] += nb_tx;
- /* Free any unsent packets. */
- if (unlikely(nb_tx < nb_dq))
+ if (unlikely(nb_tx < nb_dq)) {
+ port_statistics.tx_dropped[tx_config->rxtx_port] +=
+ (nb_dq - nb_tx);
+ /* Free any unsent packets. */
rte_mempool_put_bulk(dma_pktmbuf_pool,
(void *)&mbufs[nb_tx], nb_dq - nb_tx);
+ }
}
}
/* >8 End of transmitting packets from dmadev. */
@@ -970,25 +972,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
"rte_eth_tx_queue_setup:err=%d,port=%u\n",
ret, portid);
- /* Initialize TX buffers */
- tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
- RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
- rte_eth_dev_socket_id(portid));
- if (tx_buffer[portid] == NULL)
- rte_exit(EXIT_FAILURE,
- "Cannot allocate buffer for tx on port %u\n",
- portid);
-
- rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
-
- ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
- rte_eth_tx_buffer_count_callback,
- &port_statistics.tx_dropped[portid]);
- if (ret < 0)
- rte_exit(EXIT_FAILURE,
- "Cannot set error callback for tx buffer on port %u\n",
- portid);
-
/* Start device. 8< */
ret = rte_eth_dev_start(portid);
if (ret < 0)
--
2.33.0
* [PATCH 3/4] examples/dma: support enqueue drop statistic
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 2:56 ` [PATCH 1/4] examples/dma: fix MTU configuration Chengwen Feng
2022-04-11 2:56 ` [PATCH 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
@ 2022-04-11 2:56 ` Chengwen Feng
2022-04-11 2:56 ` [PATCH 4/4] examples/dma: add minimal copy size parameter Chengwen Feng
` (2 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 2:56 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
The copy drop statistic is counted in two scenarios, DMA copy failures and
enqueue failures, so it is difficult to locate which one caused a problem.
This patch adds a separate enqueue drop statistic to fix this.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index dd576bcf77..6b1b777cb8 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -58,6 +58,7 @@ struct dma_port_statistics {
uint64_t tx[RTE_MAX_ETHPORTS];
uint64_t tx_dropped[RTE_MAX_ETHPORTS];
uint64_t copy_dropped[RTE_MAX_ETHPORTS];
+ uint64_t enqueue_dropped[RTE_MAX_ETHPORTS];
};
struct dma_port_statistics port_statistics;
struct total_statistics {
@@ -132,12 +133,14 @@ print_port_stats(uint16_t port_id)
"\nPackets sent: %34"PRIu64
"\nPackets received: %30"PRIu64
"\nPackets dropped on tx: %25"PRIu64
- "\nPackets dropped on copy: %23"PRIu64,
+ "\nPackets dropped on copy: %23"PRIu64
+ "\nPackets dropped on enqueue: %20"PRIu64,
port_id,
port_statistics.tx[port_id],
port_statistics.rx[port_id],
port_statistics.tx_dropped[port_id],
- port_statistics.copy_dropped[port_id]);
+ port_statistics.copy_dropped[port_id],
+ port_statistics.enqueue_dropped[port_id]);
}
/* Print out statistics for one dmadev device. */
@@ -227,8 +230,9 @@ print_stats(char *prgname)
print_port_stats(port_id);
delta_ts.total_packets_dropped +=
- port_statistics.tx_dropped[port_id]
- + port_statistics.copy_dropped[port_id];
+ port_statistics.tx_dropped[port_id] +
+ port_statistics.copy_dropped[port_id] +
+ port_statistics.enqueue_dropped[port_id];
delta_ts.total_packets_tx +=
port_statistics.tx[port_id];
delta_ts.total_packets_rx +=
@@ -450,7 +454,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
(void *)&pkts_burst_copy[nb_enq],
nb_rx - nb_enq);
- port_statistics.copy_dropped[rx_config->rxtx_port] +=
+ port_statistics.enqueue_dropped[rx_config->rxtx_port] +=
(nb_rx - nb_enq);
}
}
--
2.33.0
* [PATCH 4/4] examples/dma: add minimal copy size parameter
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
` (2 preceding siblings ...)
2022-04-11 2:56 ` [PATCH 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
@ 2022-04-11 2:56 ` Chengwen Feng
2022-04-11 9:27 ` Bruce Richardson
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
5 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 2:56 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
This patch adds a minimal copy size parameter (-m/--min-copy-size): when
copying by CPU or DMA, the real copy size will be the maximum of the
mbuf's data_len and this parameter.
This parameter was designed for comparing the performance of CPU copy
and DMA copy. A user can send small packets at a high rate to drive
the performance test.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 6b1b777cb8..83094ba378 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -25,6 +25,7 @@
#define CMD_LINE_OPT_RING_SIZE "ring-size"
#define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
#define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
+#define CMD_LINE_OPT_COPY_SIZE "min-copy-size"
#define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
/* configurable number of RX/TX ring descriptors */
@@ -119,6 +120,7 @@ static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
static uint32_t max_frame_size;
+static uint32_t min_copy_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -208,7 +210,12 @@ print_stats(char *prgname)
"Rx Queues = %d, ", nb_queues);
status_strlen += snprintf(status_string + status_strlen,
sizeof(status_string) - status_strlen,
- "Ring Size = %d", ring_size);
+ "Ring Size = %d\n", ring_size);
+ status_strlen += snprintf(status_string + status_strlen,
+ sizeof(status_string) - status_strlen,
+ "Min Copy Size = %u Packet Data Room Size = %u",
+ min_copy_size, rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM);
memset(&ts, 0, sizeof(struct total_statistics));
@@ -307,7 +314,8 @@ static inline void
pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
{
rte_memcpy(rte_pktmbuf_mtod(dst, char *),
- rte_pktmbuf_mtod(src, char *), src->data_len);
+ rte_pktmbuf_mtod(src, char *),
+ RTE_MAX(src->data_len, min_copy_size));
}
/* >8 End of perform packet copy there is a user-defined function. */
@@ -324,7 +332,8 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
ret = rte_dma_copy(dev_id, 0,
rte_pktmbuf_iova(pkts[i]),
rte_pktmbuf_iova(pkts_copy[i]),
- rte_pktmbuf_data_len(pkts[i]), 0);
+ RTE_MAX(rte_pktmbuf_data_len(pkts[i]), min_copy_size),
+ 0);
if (ret < 0)
break;
@@ -576,6 +585,7 @@ dma_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
" -b --dma-batch-size: number of requests per DMA batch\n"
" -f --max-frame-size: max frame size\n"
+ " -m --min-copy-size: minimum copy length\n"
" -p --portmask: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per port (default is 1)\n"
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -621,6 +631,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
"b:" /* dma batch size */
"c:" /* copy type (sw|hw) */
"f:" /* max frame size */
+ "m:" /* min copy size */
"p:" /* portmask */
"q:" /* number of RX queues per port */
"s:" /* ring size */
@@ -636,6 +647,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
+ {CMD_LINE_OPT_COPY_SIZE, required_argument, NULL, 'm'},
{CMD_LINE_OPT_STATS_INTERVAL, required_argument, NULL, 'i'},
{NULL, 0, 0, 0}
};
@@ -670,6 +682,10 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
}
break;
+ case 'm':
+ min_copy_size = atoi(optarg);
+ break;
+
/* portmask */
case 'p':
dma_enabled_port_mask = dma_parse_portmask(optarg);
@@ -1068,6 +1084,11 @@ main(int argc, char **argv)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
/* >8 End of allocates mempool to hold the mbufs. */
+ if (min_copy_size >
+ (uint32_t)(rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM))
+ rte_exit(EXIT_FAILURE, "Min copy size > packet mbuf size\n");
+
/* Initialize each port. 8< */
cfg.nb_ports = 0;
RTE_ETH_FOREACH_DEV(portid)
--
2.33.0
* Re: [PATCH 4/4] examples/dma: add minimal copy size parameter
2022-04-11 2:56 ` [PATCH 4/4] examples/dma: add minimal copy size parameter Chengwen Feng
@ 2022-04-11 9:27 ` Bruce Richardson
2022-04-11 12:23 ` fengchengwen
0 siblings, 1 reply; 27+ messages in thread
From: Bruce Richardson @ 2022-04-11 9:27 UTC (permalink / raw)
To: Chengwen Feng; +Cc: thomas, kevin.laatz, dev
On Mon, Apr 11, 2022 at 10:56:34AM +0800, Chengwen Feng wrote:
> This patch adds a minimal copy size parameter (-m/--min-copy-size): when
> copying by CPU or DMA, the real copy size will be the maximum of the
> mbuf's data_len and this parameter.
>
> This parameter was designed for comparing the performance of CPU copy
> and DMA copy. A user can send small packets at a high rate to drive
> the performance test.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Hi,
few comments inline below.
/Bruce
> ---
> examples/dma/dmafwd.c | 27 ++++++++++++++++++++++++---
> 1 file changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
> index 6b1b777cb8..83094ba378 100644
> --- a/examples/dma/dmafwd.c
> +++ b/examples/dma/dmafwd.c
> @@ -25,6 +25,7 @@
> #define CMD_LINE_OPT_RING_SIZE "ring-size"
> #define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
> #define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
> +#define CMD_LINE_OPT_COPY_SIZE "min-copy-size"
While I'm not sure this strictly belongs in an example app to show use of
dmadev, I can see the value of it. However, I suggest we need to make it
clearer that it's not directly relevant to the normal use of the app. I
suggest making the parameter "force-min-copy-size" to make it clearer that
it's an explicit override.
> #define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
>
> /* configurable number of RX/TX ring descriptors */
> @@ -119,6 +120,7 @@ static volatile bool force_quit;
>
> static uint32_t dma_batch_sz = MAX_PKT_BURST;
> static uint32_t max_frame_size;
> +static uint32_t min_copy_size;
>
> /* ethernet addresses of ports */
> static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
> @@ -208,7 +210,12 @@ print_stats(char *prgname)
> "Rx Queues = %d, ", nb_queues);
> status_strlen += snprintf(status_string + status_strlen,
> sizeof(status_string) - status_strlen,
> - "Ring Size = %d", ring_size);
> + "Ring Size = %d\n", ring_size);
> + status_strlen += snprintf(status_string + status_strlen,
> + sizeof(status_string) - status_strlen,
> + "Min Copy Size = %u Packet Data Room Size = %u",
> + min_copy_size, rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
> + RTE_PKTMBUF_HEADROOM);
>
> memset(&ts, 0, sizeof(struct total_statistics));
>
> @@ -307,7 +314,8 @@ static inline void
> pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
> {
> rte_memcpy(rte_pktmbuf_mtod(dst, char *),
> - rte_pktmbuf_mtod(src, char *), src->data_len);
> + rte_pktmbuf_mtod(src, char *),
> + RTE_MAX(src->data_len, min_copy_size));
> }
> /* >8 End of perform packet copy there is a user-defined function. */
>
> @@ -324,7 +332,8 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
> ret = rte_dma_copy(dev_id, 0,
> rte_pktmbuf_iova(pkts[i]),
> rte_pktmbuf_iova(pkts_copy[i]),
> - rte_pktmbuf_data_len(pkts[i]), 0);
> + RTE_MAX(rte_pktmbuf_data_len(pkts[i]), min_copy_size),
> + 0);
>
> if (ret < 0)
> break;
> @@ -576,6 +585,7 @@ dma_usage(const char *prgname)
> printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
> " -b --dma-batch-size: number of requests per DMA batch\n"
> " -f --max-frame-size: max frame size\n"
> + " -m --min-copy-size: minimum copy length\n"
The help text needs to be expanded, again to make clear that this is for
perf comparison and the like. Something like "Force a minimum copy length,
even for smaller packets"
* [PATCH v2 0/4] bugfix and enhance features for DMA example
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
` (3 preceding siblings ...)
2022-04-11 2:56 ` [PATCH 4/4] examples/dma: add minimal copy size parameter Chengwen Feng
@ 2022-04-11 12:14 ` Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 1/4] examples/dma: fix MTU configuration Chengwen Feng
` (3 more replies)
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
5 siblings, 4 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 12:14 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
This patch set contains four bugfix and feature enhancement patches.
Chengwen Feng (3):
examples/dma: fix Tx drop statistic is not collected
examples/dma: support enqueue drop statistic
examples/dma: add force minimal copy size parameter
Huisong Li (1):
examples/dma: fix MTU configuration
---
v2:
* add "force" prefix to the minimal copy size parameter to make it
clearer to the user.
examples/dma/dmafwd.c | 114 +++++++++++++++++++++++++++++-------------
1 file changed, 80 insertions(+), 34 deletions(-)
--
2.33.0
* [PATCH v2 1/4] examples/dma: fix MTU configuration
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
@ 2022-04-11 12:14 ` Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
` (2 subsequent siblings)
3 siblings, 0 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 12:14 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
From: Huisong Li <lihuisong@huawei.com>
The MTU in the dma app can be configured via the 'max_frame_size' parameter,
which has a default value (1518). It is not reasonable to use this value
directly as the MTU.
This patch fixes it.
Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
examples/dma/dmafwd.c | 43 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 39 insertions(+), 4 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 608487e35c..a03ca05129 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -117,7 +117,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
-static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
+static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -851,6 +851,38 @@ assign_rings(void)
}
/* >8 End of assigning ring structures for packet exchanging. */
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_frame_size == 0)
+ return 0;
+
+ if (max_frame_size < RTE_ETHER_MIN_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_frame_size - overhead_len;
+
+ return 0;
+}
+
/*
* Initializes a given port using global settings and with the RX buffers
* coming from the mbuf_pool passed as a parameter.
@@ -878,9 +910,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
struct rte_eth_dev_info dev_info;
int ret, i;
- if (max_frame_size > local_port_conf.rxmode.mtu)
- local_port_conf.rxmode.mtu = max_frame_size;
-
/* Skip ports that are not enabled */
if ((dma_enabled_port_mask & (1 << portid)) == 0) {
printf("Skipping disabled port %u\n", portid);
@@ -895,6 +924,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
rte_exit(EXIT_FAILURE, "Cannot get device info: %s, port=%u\n",
rte_strerror(-ret), portid);
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max frame size: %u (port %u)\n",
+ max_frame_size, portid);
+
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
--
2.33.0
* [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 1/4] examples/dma: fix MTU configuration Chengwen Feng
@ 2022-04-11 12:14 ` Chengwen Feng
2022-04-13 14:57 ` Bruce Richardson
2022-04-11 12:14 ` [PATCH v2 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 4/4] examples/dma: add force minimal copy size parameter Chengwen Feng
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 12:14 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
The Tx drop statistic was designed to be collected by the
rte_eth_dev_tx_buffer mechanism, but the application uses
rte_eth_tx_burst to send packets, so the Tx drop statistic
was never collected.
This patch removes the rte_eth_dev_tx_buffer mechanism to fix the problem.
Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 27 +++++----------------------
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index a03ca05129..dd576bcf77 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -122,7 +122,6 @@ static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
-static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
struct rte_mempool *dma_pktmbuf_pool;
/* Print out statistics for one port. */
@@ -484,10 +483,13 @@ dma_tx_port(struct rxtx_port_config *tx_config)
port_statistics.tx[tx_config->rxtx_port] += nb_tx;
- /* Free any unsent packets. */
- if (unlikely(nb_tx < nb_dq))
+ if (unlikely(nb_tx < nb_dq)) {
+ port_statistics.tx_dropped[tx_config->rxtx_port] +=
+ (nb_dq - nb_tx);
+ /* Free any unsent packets. */
rte_mempool_put_bulk(dma_pktmbuf_pool,
(void *)&mbufs[nb_tx], nb_dq - nb_tx);
+ }
}
}
/* >8 End of transmitting packets from dmadev. */
@@ -970,25 +972,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
"rte_eth_tx_queue_setup:err=%d,port=%u\n",
ret, portid);
- /* Initialize TX buffers */
- tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
- RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
- rte_eth_dev_socket_id(portid));
- if (tx_buffer[portid] == NULL)
- rte_exit(EXIT_FAILURE,
- "Cannot allocate buffer for tx on port %u\n",
- portid);
-
- rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
-
- ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
- rte_eth_tx_buffer_count_callback,
- &port_statistics.tx_dropped[portid]);
- if (ret < 0)
- rte_exit(EXIT_FAILURE,
- "Cannot set error callback for tx buffer on port %u\n",
- portid);
-
/* Start device. 8< */
ret = rte_eth_dev_start(portid);
if (ret < 0)
--
2.33.0
* [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 1/4] examples/dma: fix MTU configuration Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
@ 2022-04-11 12:14 ` Chengwen Feng
2022-04-13 15:01 ` Bruce Richardson
2022-04-11 12:14 ` [PATCH v2 4/4] examples/dma: add force minimal copy size parameter Chengwen Feng
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 12:14 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
The copy drop statistic is counted in two scenarios, DMA copy failures and
enqueue failures, so it is difficult to locate which one caused a problem.
This patch adds a separate enqueue drop statistic to fix this.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index dd576bcf77..6b1b777cb8 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -58,6 +58,7 @@ struct dma_port_statistics {
uint64_t tx[RTE_MAX_ETHPORTS];
uint64_t tx_dropped[RTE_MAX_ETHPORTS];
uint64_t copy_dropped[RTE_MAX_ETHPORTS];
+ uint64_t enqueue_dropped[RTE_MAX_ETHPORTS];
};
struct dma_port_statistics port_statistics;
struct total_statistics {
@@ -132,12 +133,14 @@ print_port_stats(uint16_t port_id)
"\nPackets sent: %34"PRIu64
"\nPackets received: %30"PRIu64
"\nPackets dropped on tx: %25"PRIu64
- "\nPackets dropped on copy: %23"PRIu64,
+ "\nPackets dropped on copy: %23"PRIu64
+ "\nPackets dropped on enqueue: %20"PRIu64,
port_id,
port_statistics.tx[port_id],
port_statistics.rx[port_id],
port_statistics.tx_dropped[port_id],
- port_statistics.copy_dropped[port_id]);
+ port_statistics.copy_dropped[port_id],
+ port_statistics.enqueue_dropped[port_id]);
}
/* Print out statistics for one dmadev device. */
@@ -227,8 +230,9 @@ print_stats(char *prgname)
print_port_stats(port_id);
delta_ts.total_packets_dropped +=
- port_statistics.tx_dropped[port_id]
- + port_statistics.copy_dropped[port_id];
+ port_statistics.tx_dropped[port_id] +
+ port_statistics.copy_dropped[port_id] +
+ port_statistics.enqueue_dropped[port_id];
delta_ts.total_packets_tx +=
port_statistics.tx[port_id];
delta_ts.total_packets_rx +=
@@ -450,7 +454,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
(void *)&pkts_burst_copy[nb_enq],
nb_rx - nb_enq);
- port_statistics.copy_dropped[rx_config->rxtx_port] +=
+ port_statistics.enqueue_dropped[rx_config->rxtx_port] +=
(nb_rx - nb_enq);
}
}
--
2.33.0
* [PATCH v2 4/4] examples/dma: add force minimal copy size parameter
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
` (2 preceding siblings ...)
2022-04-11 12:14 ` [PATCH v2 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
@ 2022-04-11 12:14 ` Chengwen Feng
2022-04-13 15:04 ` Bruce Richardson
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-11 12:14 UTC (permalink / raw)
To: thomas, kevin.laatz, bruce.richardson; +Cc: dev
This patch adds a force minimal copy size parameter
(-m/--force-min-copy-size): when copying by CPU or DMA, the real copy
size will be the maximum of the mbuf's data_len and this parameter.
This parameter was designed for comparing the performance of CPU copy
and DMA copy. A user can send small packets at a high rate to drive
the performance test.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
examples/dma/dmafwd.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 6b1b777cb8..3bd86b98c1 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -25,6 +25,7 @@
#define CMD_LINE_OPT_RING_SIZE "ring-size"
#define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
#define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
+#define CMD_LINE_OPT_FORCE_COPY_SIZE "force-min-copy-size"
#define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
/* configurable number of RX/TX ring descriptors */
@@ -119,6 +120,7 @@ static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
static uint32_t max_frame_size;
+static uint32_t force_min_copy_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -208,7 +210,13 @@ print_stats(char *prgname)
"Rx Queues = %d, ", nb_queues);
status_strlen += snprintf(status_string + status_strlen,
sizeof(status_string) - status_strlen,
- "Ring Size = %d", ring_size);
+ "Ring Size = %d\n", ring_size);
+ status_strlen += snprintf(status_string + status_strlen,
+ sizeof(status_string) - status_strlen,
+ "Force Min Copy Size = %u Packet Data Room Size = %u",
+ force_min_copy_size,
+ rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM);
memset(&ts, 0, sizeof(struct total_statistics));
@@ -307,7 +315,8 @@ static inline void
pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
{
rte_memcpy(rte_pktmbuf_mtod(dst, char *),
- rte_pktmbuf_mtod(src, char *), src->data_len);
+ rte_pktmbuf_mtod(src, char *),
+ RTE_MAX(src->data_len, force_min_copy_size));
}
/* >8 End of perform packet copy there is a user-defined function. */
@@ -324,7 +333,9 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
ret = rte_dma_copy(dev_id, 0,
rte_pktmbuf_iova(pkts[i]),
rte_pktmbuf_iova(pkts_copy[i]),
- rte_pktmbuf_data_len(pkts[i]), 0);
+ RTE_MAX(rte_pktmbuf_data_len(pkts[i]),
+ force_min_copy_size),
+ 0);
if (ret < 0)
break;
@@ -576,6 +587,7 @@ dma_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
" -b --dma-batch-size: number of requests per DMA batch\n"
" -f --max-frame-size: max frame size\n"
+ " -m --force-min-copy-size: force a minimum copy length, even for smaller packets\n"
" -p --portmask: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per port (default is 1)\n"
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -621,6 +633,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
"b:" /* dma batch size */
"c:" /* copy type (sw|hw) */
"f:" /* max frame size */
+ "m:" /* force min copy size */
"p:" /* portmask */
"q:" /* number of RX queues per port */
"s:" /* ring size */
@@ -636,6 +649,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
+ {CMD_LINE_OPT_FORCE_COPY_SIZE, required_argument, NULL, 'm'},
{CMD_LINE_OPT_STATS_INTERVAL, required_argument, NULL, 'i'},
{NULL, 0, 0, 0}
};
@@ -670,6 +684,10 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
}
break;
+ case 'm':
+ force_min_copy_size = atoi(optarg);
+ break;
+
/* portmask */
case 'p':
dma_enabled_port_mask = dma_parse_portmask(optarg);
@@ -1068,6 +1086,12 @@ main(int argc, char **argv)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
/* >8 End of allocates mempool to hold the mbufs. */
+ if (force_min_copy_size >
+ (uint32_t)(rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM))
+ rte_exit(EXIT_FAILURE,
+ "Force min copy size > packet mbuf size\n");
+
/* Initialize each port. 8< */
cfg.nb_ports = 0;
RTE_ETH_FOREACH_DEV(portid)
--
2.33.0
* Re: [PATCH 4/4] examples/dma: add minimal copy size parameter
2022-04-11 9:27 ` Bruce Richardson
@ 2022-04-11 12:23 ` fengchengwen
0 siblings, 0 replies; 27+ messages in thread
From: fengchengwen @ 2022-04-11 12:23 UTC (permalink / raw)
To: Bruce Richardson; +Cc: thomas, kevin.laatz, dev
Hi Bruce, already fixed in v2. Thanks.
On 2022/4/11 17:27, Bruce Richardson wrote:
> On Mon, Apr 11, 2022 at 10:56:34AM +0800, Chengwen Feng wrote:
>> This patch adds minimal copy size parameter(-m/--min-copy-size), so
>> when do copy by CPU or DMA, the real copy size will be the maximum of
>> mbuf's data_len and this parameter.
>>
>> This parameter was designed to compare the performance between CPU copy
>> and DMA copy. User could send small packets with a high rate to drive
>> the performance test.
>>
>> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
>
> Hi,
>
> few comments inline below.
>
> /Bruce
>
>> ---
>> examples/dma/dmafwd.c | 27 ++++++++++++++++++++++++---
>> 1 file changed, 24 insertions(+), 3 deletions(-)
>>
>> diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
>> index 6b1b777cb8..83094ba378 100644
>> --- a/examples/dma/dmafwd.c
>> +++ b/examples/dma/dmafwd.c
>> @@ -25,6 +25,7 @@
>> #define CMD_LINE_OPT_RING_SIZE "ring-size"
>> #define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
>> #define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
>> +#define CMD_LINE_OPT_COPY_SIZE "min-copy-size"
>
> While I'm not sure this strictly belongs in an example app to show use of
> dmadev, I can see the value of it. However, I suggest we need to make it
> clearer that it's not directly relevant to the normal use of the app. I
> suggest making the parameter "force-min-copy-size" to make it clearer that
> it's an explicit override.
>
>> #define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
>>
>> /* configurable number of RX/TX ring descriptors */
>> @@ -119,6 +120,7 @@ static volatile bool force_quit;
>>
>> static uint32_t dma_batch_sz = MAX_PKT_BURST;
>> static uint32_t max_frame_size;
...
>> @@ -576,6 +585,7 @@ dma_usage(const char *prgname)
>> printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
>> " -b --dma-batch-size: number of requests per DMA batch\n"
>> " -f --max-frame-size: max frame size\n"
>> + " -m --min-copy-size: minimum copy length\n"
>
> The help text needs to be expanded, again to make clear that this is for
> perf comparison and the like. Something like "Force a minimum copy length,
> even for smaller packets"
>
>
> .
>
* Re: [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected
2022-04-11 12:14 ` [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
@ 2022-04-13 14:57 ` Bruce Richardson
0 siblings, 0 replies; 27+ messages in thread
From: Bruce Richardson @ 2022-04-13 14:57 UTC (permalink / raw)
To: Chengwen Feng; +Cc: thomas, kevin.laatz, dev
On Mon, Apr 11, 2022 at 08:14:57PM +0800, Chengwen Feng wrote:
> The Tx drop statistic was designed to collected by
> rte_eth_dev_tx_buffer mechanism, but the application uses
> rte_eth_tx_burst to send packets and this lead the Tx drop statistic
> was not collected.
>
> This patch removes rte_eth_dev_tx_buffer mechanism to fix the problem.
>
> Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
> Cc: stable@dpdk.org
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* Re: [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-11 12:14 ` [PATCH v2 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
@ 2022-04-13 15:01 ` Bruce Richardson
2022-04-16 6:19 ` fengchengwen
0 siblings, 1 reply; 27+ messages in thread
From: Bruce Richardson @ 2022-04-13 15:01 UTC (permalink / raw)
To: Chengwen Feng; +Cc: thomas, kevin.laatz, dev
On Mon, Apr 11, 2022 at 08:14:58PM +0800, Chengwen Feng wrote:
> The copy drop statistic counted in two scenarios: DMA copy failures and
> enqueue failures. so it is difficult to locate the problem.
>
Is the app actually tracking copy failures? From a quick glance at the code
it looks to me like the only "copy_failures" are the enqueue failures, in
which case the stat should just be renamed.
> This patch adds enqueue drop statistic to fix the it.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
> examples/dma/dmafwd.c | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
* Re: [PATCH v2 4/4] examples/dma: add force minimal copy size parameter
2022-04-11 12:14 ` [PATCH v2 4/4] examples/dma: add force minimal copy size parameter Chengwen Feng
@ 2022-04-13 15:04 ` Bruce Richardson
0 siblings, 0 replies; 27+ messages in thread
From: Bruce Richardson @ 2022-04-13 15:04 UTC (permalink / raw)
To: Chengwen Feng; +Cc: thomas, kevin.laatz, dev
On Mon, Apr 11, 2022 at 08:14:59PM +0800, Chengwen Feng wrote:
> This patch adds force minimal copy size parameter
> (-m/--force-min-copy-size), so when do copy by CPU or DMA, the real copy
> size will be the maximum of mbuf's data_len and this parameter.
>
> This parameter was designed to compare the performance between CPU copy
> and DMA copy. User could send small packets with a high rate to drive
> the performance test.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* Re: [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-13 15:01 ` Bruce Richardson
@ 2022-04-16 6:19 ` fengchengwen
2022-04-19 8:45 ` Bruce Richardson
0 siblings, 1 reply; 27+ messages in thread
From: fengchengwen @ 2022-04-16 6:19 UTC (permalink / raw)
To: Bruce Richardson; +Cc: thomas, kevin.laatz, dev
DMA is a memory-copy engine, and from that perspective, I think it's appropriate
to use copy_failures when dma_enqueue fails.
The newly added enqueue_failures is mainly used for rte_ring_enqueue_burst
failures.
Since the app doesn't have a command line, I think adding this field makes it
possible to immediately identify where the failure occurred.
On 2022/4/13 23:01, Bruce Richardson wrote:
> On Mon, Apr 11, 2022 at 08:14:58PM +0800, Chengwen Feng wrote:
>> The copy drop statistic counted in two scenarios: DMA copy failures and
>> enqueue failures. so it is difficult to locate the problem.
>>
>
> Is the app actually tracking copy failures? From a quick glance at the code
> it looks to me like the only "copy_failures" are the enqueue failures, in
> which case the stat should just be renamed.
>
>> This patch adds enqueue drop statistic to fix the it.
>>
>> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
>> ---
>> examples/dma/dmafwd.c | 14 +++++++++-----
>> 1 file changed, 9 insertions(+), 5 deletions(-)
>>
>
> .
>
* Re: [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-16 6:19 ` fengchengwen
@ 2022-04-19 8:45 ` Bruce Richardson
2022-04-19 12:09 ` fengchengwen
0 siblings, 1 reply; 27+ messages in thread
From: Bruce Richardson @ 2022-04-19 8:45 UTC (permalink / raw)
To: fengchengwen; +Cc: thomas, kevin.laatz, dev
On Sat, Apr 16, 2022 at 02:19:24PM +0800, fengchengwen wrote:
> DMA is a memcopy engine, and from that perspective, I think it's appropriate
> to use copy_failures when dma_enqueue fails.
>
> The newly added enqueue_failures is mainly used for rte_ring_enqueue_burst
> failures.
>
> Since the app doesn't have a command line, I think adding this field can
> immediately identify where it failed.
>
So one stat is for the HW path and the other is for the SW one? If that is
the case, only one stat should probably be printed out by the app depending
on the mode it is in.
> On 2022/4/13 23:01, Bruce Richardson wrote:
> > On Mon, Apr 11, 2022 at 08:14:58PM +0800, Chengwen Feng wrote:
> >> The copy drop statistic counted in two scenarios: DMA copy failures and
> >> enqueue failures. so it is difficult to locate the problem.
> >>
> >
> > Is the app actually tracking copy failures? From a quick glance at the code
> > it looks to me like the only "copy_failures" are the enqueue failures, in
> > which case the stat should just be renamed.
> >
> >> This patch adds enqueue drop statistic to fix the it.
> >>
> >> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> >> ---
> >> examples/dma/dmafwd.c | 14 +++++++++-----
> >> 1 file changed, 9 insertions(+), 5 deletions(-)
> >>
> >
> > .
> >
>
* Re: [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-19 8:45 ` Bruce Richardson
@ 2022-04-19 12:09 ` fengchengwen
2022-04-24 3:55 ` fengchengwen
0 siblings, 1 reply; 27+ messages in thread
From: fengchengwen @ 2022-04-19 12:09 UTC (permalink / raw)
To: Bruce Richardson; +Cc: thomas, kevin.laatz, dev
On 2022/4/19 16:45, Bruce Richardson wrote:
> On Sat, Apr 16, 2022 at 02:19:24PM +0800, fengchengwen wrote:
>> DMA is a memcopy engine, and from that perspective, I think it's appropriate
>> to use copy_failures when dma_enqueue fails.
>>
>> The newly added enqueue_failures is mainly used for rte_ring_enqueue_burst
>> failures.
>>
>> Since the app doesn't have a command line, I think adding this field can
>> immediately identify where it failed.
>>
>
> So one stat is for the HW path and the other is for the SW one? If that is
> the case, only one stat should probably be printed out by the app depending
> on the mode is it in.
For the HW path, both copy_failures and enqueue_failures are counted.
For the SW path, only enqueue_failures is counted (copy_failures is always 0).
>
>> On 2022/4/13 23:01, Bruce Richardson wrote:
>>> On Mon, Apr 11, 2022 at 08:14:58PM +0800, Chengwen Feng wrote:
>>>> The copy drop statistic counted in two scenarios: DMA copy failures and
>>>> enqueue failures. so it is difficult to locate the problem.
>>>>
>>>
>>> Is the app actually tracking copy failures? From a quick glance at the code
>>> it looks to me like the only "copy_failures" are the enqueue failures, in
>>> which case the stat should just be renamed.
>>>
>>>> This patch adds enqueue drop statistic to fix the it.
>>>>
>>>> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
>>>> ---
>>>> examples/dma/dmafwd.c | 14 +++++++++-----
>>>> 1 file changed, 9 insertions(+), 5 deletions(-)
>>>>
>>>
>>> .
>>>
>>
>
> .
>
* Re: [PATCH v2 3/4] examples/dma: support enqueue drop statistic
2022-04-19 12:09 ` fengchengwen
@ 2022-04-24 3:55 ` fengchengwen
0 siblings, 0 replies; 27+ messages in thread
From: fengchengwen @ 2022-04-24 3:55 UTC (permalink / raw)
To: Bruce Richardson; +Cc: thomas, kevin.laatz, dev
After thinking about it, there are other ways (e.g. dmadump/telemetry) to identify
which stage failed, so I think we can drop this patch.
I will send a v3 for the other patches.
Thanks.
On 2022/4/19 20:09, fengchengwen wrote:
> On 2022/4/19 16:45, Bruce Richardson wrote:
>> On Sat, Apr 16, 2022 at 02:19:24PM +0800, fengchengwen wrote:
>>> DMA is a memcopy engine, and from that perspective, I think it's appropriate
>>> to use copy_failures when dma_enqueue fails.
>>>
>>> The newly added enqueue_failures is mainly used for rte_ring_enqueue_burst
>>> failures.
>>>
>>> Since the app doesn't have a command line, I think adding this field can
>>> immediately identify where it failed.
>>>
>>
>> So one stat is for the HW path and the other is for the SW one? If that is
>> the case, only one stat should probably be printed out by the app depending
>> on the mode is it in.
>
> For HW path, both copy_failures and enqueue_failures
> For SW path, only the enqueue_failure, PS: the value of copy_failures is fixed to be 0
>
>>
>>> On 2022/4/13 23:01, Bruce Richardson wrote:
>>>> On Mon, Apr 11, 2022 at 08:14:58PM +0800, Chengwen Feng wrote:
>>>>> The copy drop statistic counted in two scenarios: DMA copy failures and
>>>>> enqueue failures. so it is difficult to locate the problem.
>>>>>
>>>>
>>>> Is the app actually tracking copy failures? From a quick glance at the code
>>>> it looks to me like the only "copy_failures" are the enqueue failures, in
>>>> which case the stat should just be renamed.
>>>>
>>>>> This patch adds enqueue drop statistic to fix the it.
>>>>>
>>>>> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
>>>>> ---
>>>>> examples/dma/dmafwd.c | 14 +++++++++-----
>>>>> 1 file changed, 9 insertions(+), 5 deletions(-)
>>>>>
>>>>
>>>> .
>>>>
>>>
>>
>> .
>>
>
>
> .
>
* [PATCH v3 0/3] bugfix and enhance features for DMA example
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
` (4 preceding siblings ...)
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
@ 2022-04-24 6:07 ` Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 1/3] examples/dma: fix MTU configuration Chengwen Feng
` (3 more replies)
5 siblings, 4 replies; 27+ messages in thread
From: Chengwen Feng @ 2022-04-24 6:07 UTC (permalink / raw)
To: thomas, ferruh.yigit, kevin.laatz, bruce.richardson; +Cc: dev
This patch set contains three bugfix and feature-enhancement patches.
Chengwen Feng (2):
examples/dma: fix Tx drop statistic is not collected
examples/dma: add force minimal copy size parameter
Huisong Li (1):
examples/dma: fix MTU configuration
---
v3:
* remove 'enqueue drop statistic' patch.
v2:
* add "force" prefix for the minimal copy size parameter to make it
clearer to user.
examples/dma/dmafwd.c | 100 ++++++++++++++++++++++++++++++------------
1 file changed, 71 insertions(+), 29 deletions(-)
--
2.33.0
* [PATCH v3 1/3] examples/dma: fix MTU configuration
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
@ 2022-04-24 6:07 ` Chengwen Feng
2022-05-11 2:26 ` fengchengwen
2022-04-24 6:07 ` [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
` (2 subsequent siblings)
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-24 6:07 UTC (permalink / raw)
To: thomas, ferruh.yigit, kevin.laatz, bruce.richardson; +Cc: dev
From: Huisong Li <lihuisong@huawei.com>
The MTU in the dma app can be configured via the 'max_frame_size' parameter,
which has a default value (1518). It is not reasonable to use this value
directly as the MTU.
This patch fixes it.
Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
examples/dma/dmafwd.c | 43 +++++++++++++++++++++++++++++++++++++++----
1 file changed, 39 insertions(+), 4 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index 608487e35c..a03ca05129 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -117,7 +117,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
-static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
+static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -851,6 +851,38 @@ assign_rings(void)
}
/* >8 End of assigning ring structures for packet exchanging. */
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_frame_size == 0)
+ return 0;
+
+ if (max_frame_size < RTE_ETHER_MIN_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_frame_size - overhead_len;
+
+ return 0;
+}
+
/*
* Initializes a given port using global settings and with the RX buffers
* coming from the mbuf_pool passed as a parameter.
@@ -878,9 +910,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
struct rte_eth_dev_info dev_info;
int ret, i;
- if (max_frame_size > local_port_conf.rxmode.mtu)
- local_port_conf.rxmode.mtu = max_frame_size;
-
/* Skip ports that are not enabled */
if ((dma_enabled_port_mask & (1 << portid)) == 0) {
printf("Skipping disabled port %u\n", portid);
@@ -895,6 +924,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
rte_exit(EXIT_FAILURE, "Cannot get device info: %s, port=%u\n",
rte_strerror(-ret), portid);
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max frame size: %u (port %u)\n",
+ max_frame_size, portid);
+
local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
dev_info.flow_type_rss_offloads;
ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
--
2.33.0
* [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 1/3] examples/dma: fix MTU configuration Chengwen Feng
@ 2022-04-24 6:07 ` Chengwen Feng
2022-04-27 10:54 ` Kevin Laatz
2022-04-24 6:07 ` [PATCH v3 3/3] examples/dma: add force minimal copy size parameter Chengwen Feng
2022-06-06 21:35 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Thomas Monjalon
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-24 6:07 UTC (permalink / raw)
To: thomas, ferruh.yigit, kevin.laatz, bruce.richardson; +Cc: dev
The Tx drop statistic was designed to be collected by the
rte_eth_dev_tx_buffer mechanism, but the application uses
rte_eth_tx_burst to send packets, so the Tx drop statistic
was never collected.
This patch removes rte_eth_dev_tx_buffer mechanism to fix the problem.
Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
examples/dma/dmafwd.c | 27 +++++----------------------
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index a03ca05129..dd576bcf77 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -122,7 +122,6 @@ static uint32_t max_frame_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
-static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
struct rte_mempool *dma_pktmbuf_pool;
/* Print out statistics for one port. */
@@ -484,10 +483,13 @@ dma_tx_port(struct rxtx_port_config *tx_config)
port_statistics.tx[tx_config->rxtx_port] += nb_tx;
- /* Free any unsent packets. */
- if (unlikely(nb_tx < nb_dq))
+ if (unlikely(nb_tx < nb_dq)) {
+ port_statistics.tx_dropped[tx_config->rxtx_port] +=
+ (nb_dq - nb_tx);
+ /* Free any unsent packets. */
rte_mempool_put_bulk(dma_pktmbuf_pool,
(void *)&mbufs[nb_tx], nb_dq - nb_tx);
+ }
}
}
/* >8 End of transmitting packets from dmadev. */
@@ -970,25 +972,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
"rte_eth_tx_queue_setup:err=%d,port=%u\n",
ret, portid);
- /* Initialize TX buffers */
- tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
- RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
- rte_eth_dev_socket_id(portid));
- if (tx_buffer[portid] == NULL)
- rte_exit(EXIT_FAILURE,
- "Cannot allocate buffer for tx on port %u\n",
- portid);
-
- rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
-
- ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
- rte_eth_tx_buffer_count_callback,
- &port_statistics.tx_dropped[portid]);
- if (ret < 0)
- rte_exit(EXIT_FAILURE,
- "Cannot set error callback for tx buffer on port %u\n",
- portid);
-
/* Start device. 8< */
ret = rte_eth_dev_start(portid);
if (ret < 0)
--
2.33.0
* [PATCH v3 3/3] examples/dma: add force minimal copy size parameter
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 1/3] examples/dma: fix MTU configuration Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
@ 2022-04-24 6:07 ` Chengwen Feng
2022-04-27 10:54 ` Kevin Laatz
2022-06-06 21:35 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Thomas Monjalon
3 siblings, 1 reply; 27+ messages in thread
From: Chengwen Feng @ 2022-04-24 6:07 UTC (permalink / raw)
To: thomas, ferruh.yigit, kevin.laatz, bruce.richardson; +Cc: dev
This patch adds a force minimal copy size parameter
(-m/--force-min-copy-size): when copying by CPU or DMA, the actual copy
size is the maximum of the mbuf's data_len and this parameter.
This parameter was designed to compare the performance of CPU copy and
DMA copy. Users can send small packets at a high rate to drive the
performance test.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
examples/dma/dmafwd.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
index dd576bcf77..67b5a9b22b 100644
--- a/examples/dma/dmafwd.c
+++ b/examples/dma/dmafwd.c
@@ -25,6 +25,7 @@
#define CMD_LINE_OPT_RING_SIZE "ring-size"
#define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
#define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
+#define CMD_LINE_OPT_FORCE_COPY_SIZE "force-min-copy-size"
#define CMD_LINE_OPT_STATS_INTERVAL "stats-interval"
/* configurable number of RX/TX ring descriptors */
@@ -118,6 +119,7 @@ static volatile bool force_quit;
static uint32_t dma_batch_sz = MAX_PKT_BURST;
static uint32_t max_frame_size;
+static uint32_t force_min_copy_size;
/* ethernet addresses of ports */
static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -205,7 +207,13 @@ print_stats(char *prgname)
"Rx Queues = %d, ", nb_queues);
status_strlen += snprintf(status_string + status_strlen,
sizeof(status_string) - status_strlen,
- "Ring Size = %d", ring_size);
+ "Ring Size = %d\n", ring_size);
+ status_strlen += snprintf(status_string + status_strlen,
+ sizeof(status_string) - status_strlen,
+ "Force Min Copy Size = %u Packet Data Room Size = %u",
+ force_min_copy_size,
+ rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM);
memset(&ts, 0, sizeof(struct total_statistics));
@@ -303,7 +311,8 @@ static inline void
pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
{
rte_memcpy(rte_pktmbuf_mtod(dst, char *),
- rte_pktmbuf_mtod(src, char *), src->data_len);
+ rte_pktmbuf_mtod(src, char *),
+ RTE_MAX(src->data_len, force_min_copy_size));
}
/* >8 End of perform packet copy there is a user-defined function. */
@@ -320,7 +329,9 @@ dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
ret = rte_dma_copy(dev_id, 0,
rte_pktmbuf_iova(pkts[i]),
rte_pktmbuf_iova(pkts_copy[i]),
- rte_pktmbuf_data_len(pkts[i]), 0);
+ RTE_MAX(rte_pktmbuf_data_len(pkts[i]),
+ force_min_copy_size),
+ 0);
if (ret < 0)
break;
@@ -572,6 +583,7 @@ dma_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
" -b --dma-batch-size: number of requests per DMA batch\n"
" -f --max-frame-size: max frame size\n"
+ " -m --force-min-copy-size: force a minimum copy length, even for smaller packets\n"
" -p --portmask: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per port (default is 1)\n"
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -617,6 +629,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
"b:" /* dma batch size */
"c:" /* copy type (sw|hw) */
"f:" /* max frame size */
+ "m:" /* force min copy size */
"p:" /* portmask */
"q:" /* number of RX queues per port */
"s:" /* ring size */
@@ -632,6 +645,7 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
+ {CMD_LINE_OPT_FORCE_COPY_SIZE, required_argument, NULL, 'm'},
{CMD_LINE_OPT_STATS_INTERVAL, required_argument, NULL, 'i'},
{NULL, 0, 0, 0}
};
@@ -666,6 +680,10 @@ dma_parse_args(int argc, char **argv, unsigned int nb_ports)
}
break;
+ case 'm':
+ force_min_copy_size = atoi(optarg);
+ break;
+
/* portmask */
case 'p':
dma_enabled_port_mask = dma_parse_portmask(optarg);
@@ -1064,6 +1082,12 @@ main(int argc, char **argv)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
/* >8 End of allocates mempool to hold the mbufs. */
+ if (force_min_copy_size >
+ (uint32_t)(rte_pktmbuf_data_room_size(dma_pktmbuf_pool) -
+ RTE_PKTMBUF_HEADROOM))
+ rte_exit(EXIT_FAILURE,
+ "Force min copy size > packet mbuf size\n");
+
/* Initialize each port. 8< */
cfg.nb_ports = 0;
RTE_ETH_FOREACH_DEV(portid)
--
2.33.0
* Re: [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected
2022-04-24 6:07 ` [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
@ 2022-04-27 10:54 ` Kevin Laatz
0 siblings, 0 replies; 27+ messages in thread
From: Kevin Laatz @ 2022-04-27 10:54 UTC (permalink / raw)
To: Chengwen Feng, thomas, ferruh.yigit, bruce.richardson; +Cc: dev
On 24/04/2022 07:07, Chengwen Feng wrote:
> The Tx drop statistic was designed to collected by
> rte_eth_dev_tx_buffer mechanism, but the application uses
> rte_eth_tx_burst to send packets and this lead the Tx drop statistic
> was not collected.
>
> This patch removes rte_eth_dev_tx_buffer mechanism to fix the problem.
>
> Fixes: 632bcd9b5d4f ("examples/ioat: print statistics")
> Cc: stable@dpdk.org
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> examples/dma/dmafwd.c | 27 +++++----------------------
> 1 file changed, 5 insertions(+), 22 deletions(-)
>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
* Re: [PATCH v3 3/3] examples/dma: add force minimal copy size parameter
2022-04-24 6:07 ` [PATCH v3 3/3] examples/dma: add force minimal copy size parameter Chengwen Feng
@ 2022-04-27 10:54 ` Kevin Laatz
0 siblings, 0 replies; 27+ messages in thread
From: Kevin Laatz @ 2022-04-27 10:54 UTC (permalink / raw)
To: Chengwen Feng, thomas, ferruh.yigit, bruce.richardson; +Cc: dev
On 24/04/2022 07:07, Chengwen Feng wrote:
> This patch adds force minimal copy size parameter
> (-m/--force-min-copy-size), so when do copy by CPU or DMA, the real copy
> size will be the maximum of mbuf's data_len and this parameter.
>
> This parameter was designed to compare the performance between CPU copy
> and DMA copy. User could send small packets with a high rate to drive
> the performance test.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> examples/dma/dmafwd.c | 30 +++++++++++++++++++++++++++---
> 1 file changed, 27 insertions(+), 3 deletions(-)
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
* Re: [PATCH v3 1/3] examples/dma: fix MTU configuration
2022-04-24 6:07 ` [PATCH v3 1/3] examples/dma: fix MTU configuration Chengwen Feng
@ 2022-05-11 2:26 ` fengchengwen
0 siblings, 0 replies; 27+ messages in thread
From: fengchengwen @ 2022-05-11 2:26 UTC (permalink / raw)
To: thomas, ferruh.yigit, kevin.laatz, bruce.richardson; +Cc: dev
Hi Ferruh,
Could you please review this patch?
I notice many examples (e.g. l3fwd/l3fwd-acl/l3fwd-power/l3fwd-thread/l3fwd-graph)
use the same approach (defining eth_dev_get_overhead_len/config_port_max_pkt_len).
Thanks.
On 2022/4/24 14:07, Chengwen Feng wrote:
> From: Huisong Li <lihuisong@huawei.com>
>
> The MTU in dma App can be configured by 'max_frame_size' parameters which
> have a default value(1518). It's not reasonable to use it directly as MTU.
> This patch fix it.
>
> Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
> Cc: stable@dpdk.org
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
> examples/dma/dmafwd.c | 43 +++++++++++++++++++++++++++++++++++++++----
> 1 file changed, 39 insertions(+), 4 deletions(-)
>
> diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
> index 608487e35c..a03ca05129 100644
> --- a/examples/dma/dmafwd.c
> +++ b/examples/dma/dmafwd.c
> @@ -117,7 +117,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
> static volatile bool force_quit;
>
> static uint32_t dma_batch_sz = MAX_PKT_BURST;
> -static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
> +static uint32_t max_frame_size;
>
> /* ethernet addresses of ports */
> static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS];
> @@ -851,6 +851,38 @@ assign_rings(void)
> }
> /* >8 End of assigning ring structures for packet exchanging. */
>
> +static uint32_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint32_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t overhead_len;
> +
> + if (max_frame_size == 0)
> + return 0;
> +
> + if (max_frame_size < RTE_ETHER_MIN_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_frame_size - overhead_len;
> +
> + return 0;
> +}
> +
> /*
> * Initializes a given port using global settings and with the RX buffers
> * coming from the mbuf_pool passed as a parameter.
> @@ -878,9 +910,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
> struct rte_eth_dev_info dev_info;
> int ret, i;
>
> - if (max_frame_size > local_port_conf.rxmode.mtu)
> - local_port_conf.rxmode.mtu = max_frame_size;
> -
> /* Skip ports that are not enabled */
> if ((dma_enabled_port_mask & (1 << portid)) == 0) {
> printf("Skipping disabled port %u\n", portid);
> @@ -895,6 +924,12 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
> rte_exit(EXIT_FAILURE, "Cannot get device info: %s, port=%u\n",
> rte_strerror(-ret), portid);
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max frame size: %u (port %u)\n",
> + max_frame_size, portid);
> +
> local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> dev_info.flow_type_rss_offloads;
> ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
>
* Re: [PATCH v3 0/3] bugfix and enhance features for DMA example
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
` (2 preceding siblings ...)
2022-04-24 6:07 ` [PATCH v3 3/3] examples/dma: add force minimal copy size parameter Chengwen Feng
@ 2022-06-06 21:35 ` Thomas Monjalon
3 siblings, 0 replies; 27+ messages in thread
From: Thomas Monjalon @ 2022-06-06 21:35 UTC (permalink / raw)
To: Chengwen Feng; +Cc: ferruh.yigit, kevin.laatz, bruce.richardson, dev
24/04/2022 08:07, Chengwen Feng:
> This patch set contains three bugfix and enhanced feature patches.
>
> Chengwen Feng (2):
> examples/dma: fix Tx drop statistic is not collected
> examples/dma: add force minimal copy size parameter
>
> Huisong Li (1):
> examples/dma: fix MTU configuration
Applied, thanks.
end of thread (newest: 2022-06-06 21:35 UTC)
Thread overview: 27+ messages
2022-04-11 2:56 [PATCH 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 2:56 ` [PATCH 1/4] examples/dma: fix MTU configuration Chengwen Feng
2022-04-11 2:56 ` [PATCH 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
2022-04-11 2:56 ` [PATCH 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
2022-04-11 2:56 ` [PATCH 4/4] examples/dma: add minimal copy size parameter Chengwen Feng
2022-04-11 9:27 ` Bruce Richardson
2022-04-11 12:23 ` fengchengwen
2022-04-11 12:14 ` [PATCH v2 0/4] bugfix and enhance features for DMA example Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 1/4] examples/dma: fix MTU configuration Chengwen Feng
2022-04-11 12:14 ` [PATCH v2 2/4] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
2022-04-13 14:57 ` Bruce Richardson
2022-04-11 12:14 ` [PATCH v2 3/4] examples/dma: support enqueue drop statistic Chengwen Feng
2022-04-13 15:01 ` Bruce Richardson
2022-04-16 6:19 ` fengchengwen
2022-04-19 8:45 ` Bruce Richardson
2022-04-19 12:09 ` fengchengwen
2022-04-24 3:55 ` fengchengwen
2022-04-11 12:14 ` [PATCH v2 4/4] examples/dma: add force minimal copy size parameter Chengwen Feng
2022-04-13 15:04 ` Bruce Richardson
2022-04-24 6:07 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Chengwen Feng
2022-04-24 6:07 ` [PATCH v3 1/3] examples/dma: fix MTU configuration Chengwen Feng
2022-05-11 2:26 ` fengchengwen
2022-04-24 6:07 ` [PATCH v3 2/3] examples/dma: fix Tx drop statistic is not collected Chengwen Feng
2022-04-27 10:54 ` Kevin Laatz
2022-04-24 6:07 ` [PATCH v3 3/3] examples/dma: add force minimal copy size parameter Chengwen Feng
2022-04-27 10:54 ` Kevin Laatz
2022-06-06 21:35 ` [PATCH v3 0/3] bugfix and enhance features for DMA example Thomas Monjalon