From: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
To: dev@dpdk.org
Cc: hujiayu.hu@foxmail.com, roretzla@linux.microsoft.com,
bruce.richardson@intel.com, anatoly.burakov@intel.com,
vladimir.medvedkin@intel.com,
Konstantin Ananyev <konstantin.ananyev@huawei.com>
Subject: [RFC 4/4] net/ice: remove use of VLAs
Date: Thu, 23 May 2024 17:26:04 +0100 [thread overview]
Message-ID: <20240523162604.2600-5-konstantin.v.ananyev@yandex.ru> (raw)
In-Reply-To: <20240523162604.2600-1-konstantin.v.ananyev@yandex.ru>
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>

../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]
Here the VLA was used as a temporary array for mbufs that serve as split
RX payload buffers.
Since at any given time only one thread can receive from a particular
queue, we can allocate extra space for that array at rx_queue_setup()
time and then safely use it on the RX fast path.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
drivers/net/ice/ice_rxtx.h | 2 ++
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 95a2db3432..6395a3b50a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
struct ice_vsi *vsi = pf->main_vsi;
struct ice_rx_queue *rxq;
const struct rte_memzone *rz;
- uint32_t ring_size;
+ uint32_t ring_size, tlen;
uint16_t len;
int use_def_burst_func = 1;
uint64_t offloads;
@@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
/* always reserve more for bulk alloc */
len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
+ /* allocate extra entries for SW split buffer */
+ tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
+ rxq->rx_free_thresh : 0;
+ tlen += len;
+
/* Allocate the software ring. */
rxq->sw_ring = rte_zmalloc_socket(NULL,
- sizeof(struct ice_rx_entry) * len,
+ sizeof(struct ice_rx_entry) * tlen,
RTE_CACHE_LINE_SIZE,
socket_id);
if (!rxq->sw_ring) {
@@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
+
ice_reset_rx_queue(rxq);
rxq->q_set = true;
dev->data->rx_queues[queue_idx] = rxq;
@@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
uint64_t dma_addr;
int diag, diag_pay;
uint64_t pay_addr;
- struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
/* Allocate buffers in bulk */
alloc_idx = (uint16_t)(rxq->rx_free_trigger -
@@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
- (void *)mbufs_pay, rxq->rx_free_thresh);
+ (void *)rxq->sw_split_buf, rxq->rx_free_thresh);
if (unlikely(diag_pay != 0)) {
PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk");
return -ENOMEM;
@@ -1908,8 +1914,8 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
rxdp[i].read.hdr_addr = 0;
rxdp[i].read.pkt_addr = dma_addr;
} else {
- mb->next = mbufs_pay[i];
- pay_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbufs_pay[i]));
+ mb->next = rxq->sw_split_buf[i].mbuf;
+ pay_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb->next));
rxdp[i].read.hdr_addr = dma_addr;
rxdp[i].read.pkt_addr = pay_addr;
}
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index f7276cfc9f..d0f0b6c1d2 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -139,6 +139,8 @@ struct ice_rx_queue {
uint32_t hw_time_high; /* high 32 bits of timestamp */
uint32_t hw_time_low; /* low 32 bits of timestamp */
uint64_t hw_time_update; /* SW time of HW record updating */
+ struct ice_rx_entry *sw_split_buf;
+ /* address of temp buffer for RX split mbufs */
struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG];
uint32_t rxseg_nb;
bool ts_enable; /* if rxq timestamp is enabled */
--
2.35.3
Thread overview: 16+ messages
2024-05-23 16:26 [RFC 0/4] remove use of VLA Konstantin Ananyev
2024-05-23 16:26 ` [RFC 1/4] gro: fix overwrite unprocessed packets Konstantin Ananyev
2024-06-12 0:48 ` Ferruh Yigit
2024-05-23 16:26 ` [RFC 2/4] gro: remove use of VLAs Konstantin Ananyev
2024-06-12 0:48 ` Ferruh Yigit
2024-06-13 10:20 ` Konstantin Ananyev
2024-06-14 15:11 ` Ferruh Yigit
2024-06-28 12:57 ` Konstantin Ananyev
2024-05-23 16:26 ` [RFC 3/4] net/ixgbe: " Konstantin Ananyev
2024-06-12 1:00 ` Ferruh Yigit
2024-05-23 16:26 ` Konstantin Ananyev [this message]
2024-06-12 1:12 ` [RFC 4/4] net/ice: " Ferruh Yigit
2024-06-13 10:32 ` Konstantin Ananyev
2024-06-14 15:31 ` Ferruh Yigit
2024-06-12 1:14 ` [RFC 0/4] remove use of VLA Ferruh Yigit
2024-06-13 10:43 ` Konstantin Ananyev