Bug ID: 1609
Summary: memif jumbo support broken
Product: DPDK
Version: 23.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: bly454@gmail.com
Target Milestone: ---

We just completed our upgrade from DPDK 21.11.2 to 23.11.1. Our testing found a
defect in the current 23.11 code that may also affect other releases.

Please review the "dst_off" changes below, which restore jumbo support (frames
larger than 2 KB) when a packet spans multiple memif buffers. You will also
note we have disabled the new "bulk" code paths: we have not had time to
review them, so for now we fall back to the original "else" code paths with
these fixes applied. Similar fixes/logic should be confirmed present in VPP's
libmemif solution as well.

We recommend adding a new unit test that exercises randomly sized frames
spanning 1, 2 and 3 memif buffers to validate jumbo frame support.

diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 2c2fafadf9..4a3a46c34a 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -357,7 +357,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
                goto refill;
        n_slots = (last_slot - cur_slot) & mask;

-       if (likely(mbuf_size >= pmd->cfg.pkt_buffer_size)) {
+       if (0 /*likely(mbuf_size >= pmd->cfg.pkt_buffer_size)*/) {
                struct rte_mbuf *mbufs[MAX_PKT_BURST];
 next_bulk:
                ret = rte_pktmbuf_alloc_bulk(mq->mempool, mbufs, MAX_PKT_BURST);
@@ -428,12 +428,12 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
                        mbuf = mbuf_head;
                        mbuf->port = mq->in_port;

+                       dst_off = 0;
 next_slot2:
                        s0 = cur_slot & mask;
                        d0 = &ring->desc[s0];

                        src_len = d0->length;
-                       dst_off = 0;
                        src_off = 0;

                        do {
@@ -722,7 +722,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
        }

        uint16_t mbuf_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
-       if (i == nb_pkts && pmd->cfg.pkt_buffer_size >= mbuf_size) {
+       if ( 0 /*i == nb_pkts && pmd->cfg.pkt_buffer_size >= mbuf_size*/) {
                buf_tmp = bufs;
                while (n_tx_pkts < nb_pkts && n_free) {
                        mbuf_head = *bufs++;
@@ -772,6 +772,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
                        dst_off = 0;
                        dst_len = (type == MEMIF_RING_C2S) ?
                                pmd->run.pkt_buffer_size : d0->length;
+                       d0->flags = 0;

 next_in_chain2:
                        src_off = 0;
          

