From: Junxiao Shi <git@mail1.yoursunny.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH] net/memif: fix chained mbuf determination
Date: Thu, 9 Sep 2021 14:42:06 +0000
Message-ID: <973f32e49849ad68@cs.arizona.edu>

Previously, the TX functions called rte_pktmbuf_is_contiguous() to
determine whether an mbuf is chained. However,
rte_pktmbuf_is_contiguous() is designed to work on the first mbuf of a
packet only. When a packet contains three or more segment mbufs in a
chain, this can cause truncated packets or rte_mbuf_sanity_check()
panics.
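For reference, rte_pktmbuf_is_contiguous() (roughly as defined in
rte_mbuf.h) only checks the nb_segs field of the mbuf it is handed:

    static inline int
    rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
    {
            __rte_mbuf_sanity_check(m, 1);
            return m->nb_segs == 1;
    }

nb_segs is meaningful only in the first segment of a chain; later
segments carry nb_segs == 1. So for a chain A -> B -> C,
rte_pktmbuf_is_contiguous(B) returns 1 and the copy loop stops after
B, truncating the packet. With mbuf debug assertions enabled, the
sanity check itself can panic when handed a non-head segment whose
nb_segs does not match its remaining chain length.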
This patch updates the TX functions to count remaining segments using
the mbuf_head->nb_segs field, which is valid in all cases. It also
preserves the property that the mbuf's second cacheline is accessed
only when a chained mbuf is actually present. The driver now also
advertises the DEV_TX_OFFLOAD_MULTI_SEGS capability to reflect its
multi-segment support.
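A minimal sketch of the corrected walk (simplified from the patch
below; the per-segment copy into the memif ring is elided):

    /* Take the segment count once from the head mbuf; only the
     * head's nb_segs is valid for the whole chain. */
    uint16_t nb_segs = mbuf_head->nb_segs;
    struct rte_mbuf *mbuf = mbuf_head;

    for (;;) {
            /* copy rte_pktmbuf_data_len(mbuf) bytes from this
             * segment into the memif ring descriptor(s) */
            if (--nb_segs == 0)
                    break;
            /* the second cacheline (mbuf->next) is touched only
             * when another segment actually follows */
            mbuf = mbuf->next;
    }
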
Signed-off-by: Junxiao Shi <git@mail1.yoursunny.com>
---
drivers/net/memif/rte_eth_memif.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e..fd9e877c3d 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -199,6 +199,7 @@ memif_dev_info(struct rte_eth_dev *dev __rte_unused, struct rte_eth_dev_info *de
dev_info->max_rx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->max_tx_queues = ETH_MEMIF_MAX_NUM_Q_PAIRS;
dev_info->min_rx_bufsize = 0;
+ dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
return 0;
}
@@ -567,7 +568,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
rte_eth_devices[mq->in_port].process_private;
memif_ring_t *ring = memif_get_ring_from_queue(proc_private, mq);
uint16_t slot, saved_slot, n_free, ring_size, mask, n_tx_pkts = 0;
- uint16_t src_len, src_off, dst_len, dst_off, cp_len;
+ uint16_t src_len, src_off, dst_len, dst_off, cp_len, nb_segs;
memif_ring_type_t type = mq->type;
memif_desc_t *d0;
struct rte_mbuf *mbuf;
@@ -615,6 +616,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
while (n_tx_pkts < nb_pkts && n_free) {
mbuf_head = *bufs++;
+ nb_segs = mbuf_head->nb_segs;
mbuf = mbuf_head;
saved_slot = slot;
@@ -659,7 +661,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
d0->length = dst_off;
}
- if (rte_pktmbuf_is_contiguous(mbuf) == 0) {
+ if (--nb_segs > 0) {
mbuf = mbuf->next;
goto next_in_chain;
}
@@ -696,6 +698,7 @@ memif_tx_one_zc(struct pmd_process_private *proc_private, struct memif_queue *mq
uint16_t slot, uint16_t n_free)
{
memif_desc_t *d0;
+ uint16_t nb_segs = mbuf->nb_segs;
int used_slots = 1;
next_in_chain:
@@ -716,7 +719,7 @@ memif_tx_one_zc(struct pmd_process_private *proc_private, struct memif_queue *mq
d0->flags = 0;
/* check if buffer is chained */
- if (rte_pktmbuf_is_contiguous(mbuf) == 0) {
+ if (--nb_segs > 0) {
if (n_free < 2)
return 0;
/* mark buffer as chained */
--
2.17.1