From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
by inbox.dpdk.org (Postfix) with ESMTP id C44C345FCB;
Thu, 2 Jan 2025 20:48:18 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
by mails.dpdk.org (Postfix) with ESMTP id 5C82A402E0;
Thu, 2 Jan 2025 20:48:18 +0100 (CET)
Received: from inbox.dpdk.org (inbox.dpdk.org [95.142.172.178])
by mails.dpdk.org (Postfix) with ESMTP id 86CA4402B4
for ; Thu, 2 Jan 2025 20:48:17 +0100 (CET)
Received: by inbox.dpdk.org (Postfix, from userid 33)
id 72B2B45FCC; Thu, 2 Jan 2025 20:48:17 +0100 (CET)
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/ethdev Bug 1609] memif jumbo support broken
Date: Thu, 02 Jan 2025 19:48:16 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: ethdev
X-Bugzilla-Version: 23.11
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: bly454@gmail.com
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Resolution:
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform
op_sys bug_status bug_severity priority component assigned_to reporter
target_milestone
Message-ID:
Content-Type: multipart/alternative; boundary=17358472970.4Cba.1195485
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: http://bugs.dpdk.org/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
--17358472970.4Cba.1195485
Date: Thu, 2 Jan 2025 20:48:17 +0100
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://bugs.dpdk.org/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
https://bugs.dpdk.org/show_bug.cgi?id=1609
Bug ID: 1609
Summary: memif jumbo support broken
Product: DPDK
Version: 23.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: bly454@gmail.com
Target Milestone: ---
We just completed our upgrade from DPDK 21.11.2 to 23.11.1. Our testing found a
defect in the current 23.11 code; it may affect other releases as well.
Please review the "dst_off" changes below, which restore jumbo-frame support
(frames larger than 2 KB) for packets that span multiple memif buffers. Note
that we have also disabled the new "bulk" path, as we have not had time to
review it; for now we prefer the original "else" code together with these
fixes. Equivalent fixes should be confirmed in VPP's libmemif as well.
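To make the dst_off issue concrete, here is a simplified model of the RX copy
path (hypothetical helper names, not the actual driver code): one flat buffer
stands in for the mbuf chain, and each descriptor stands in for one memif slot.

```c
#include <stddef.h>
#include <string.h>

/* Simplified model of the fixed eth_memif_rx copy path. dst_off is
 * reset once per frame, so each descriptor of a multi-buffer (jumbo)
 * frame appends after the previous one. The 23.11 code reset dst_off
 * at the top of the per-slot loop instead, so descriptor N overwrote
 * descriptor N-1 at offset 0 and the reassembled frame was truncated
 * to the last segment. */
static size_t copy_frame(const unsigned char *const descs[],
			 const size_t desc_len[], int n_descs,
			 unsigned char *dst)
{
	size_t dst_off = 0;	/* the fix: reset once per frame */
	int s;

	for (s = 0; s < n_descs; s++) {
		/* buggy variant had "dst_off = 0;" here, per descriptor */
		memcpy(dst + dst_off, descs[s], desc_len[s]);
		dst_off += desc_len[s];
	}
	return dst_off;		/* total reassembled frame length */
}
```

With the per-descriptor reset, the model above would instead return only the
last descriptor's length, and everything copied before it would be lost.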
We recommend adding a new unit test that exercises randomly sized frames
spanning 1, 2, and 3 memif buffers to validate jumbo-frame support.
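A sketch of the shape such a test could take (hypothetical helper, assuming the
default 2048-byte pkt_buffer_size; it checks the segmentation/reassembly
invariant rather than driving the PMD itself):

```c
#include <stdlib.h>
#include <string.h>

#define PKT_BUF_SZ 2048	/* assumed default memif pkt_buffer_size */
#define MAX_DESCS 3

/* For each iteration: build a frame with a random length spanning 1,
 * 2 or 3 memif buffers, segment it into <=2048-byte descriptors,
 * reassemble with a running dst_off (the fixed RX behavior), and
 * verify the result byte-for-byte. Returns 0 if all iterations pass,
 * -1 otherwise. */
static int check_jumbo_reassembly(unsigned int seed, int iterations)
{
	unsigned char frame[MAX_DESCS * PKT_BUF_SZ];
	unsigned char out[MAX_DESCS * PKT_BUF_SZ];
	int it;

	srand(seed);	/* fixed seed keeps failures reproducible */
	for (it = 0; it < iterations; it++) {
		size_t len = 1 + (size_t)(rand() % (MAX_DESCS * PKT_BUF_SZ));
		size_t i, src = 0, dst_off = 0;	/* reset once per frame */

		for (i = 0; i < len; i++)
			frame[i] = (unsigned char)rand();

		while (src < len) {
			size_t seg = len - src;

			if (seg > PKT_BUF_SZ)
				seg = PKT_BUF_SZ;	/* one descriptor */
			memcpy(out + dst_off, frame + src, seg);
			dst_off += seg;
			src += seg;
		}
		if (dst_off != len || memcmp(out, frame, len) != 0)
			return -1;
	}
	return 0;
}
```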
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 2c2fafadf9..4a3a46c34a 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -357,7 +357,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		goto refill;
 	n_slots = (last_slot - cur_slot) & mask;

-	if (likely(mbuf_size >= pmd->cfg.pkt_buffer_size)) {
+	if (0 /*likely(mbuf_size >= pmd->cfg.pkt_buffer_size)*/) {
 		struct rte_mbuf *mbufs[MAX_PKT_BURST];
 next_bulk:
 		ret = rte_pktmbuf_alloc_bulk(mq->mempool, mbufs, MAX_PKT_BURST);
@@ -428,12 +428,12 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			mbuf = mbuf_head;
 			mbuf->port = mq->in_port;
+			dst_off = 0;

 next_slot2:
 			s0 = cur_slot & mask;
 			d0 = &ring->desc[s0];

 			src_len = d0->length;
-			dst_off = 0;
 			src_off = 0;

 			do {
@@ -722,7 +722,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}

 	uint16_t mbuf_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
-	if (i == nb_pkts && pmd->cfg.pkt_buffer_size >= mbuf_size) {
+	if (0 /*i == nb_pkts && pmd->cfg.pkt_buffer_size >= mbuf_size*/) {
 		buf_tmp = bufs;
 		while (n_tx_pkts < nb_pkts && n_free) {
 			mbuf_head = *bufs++;
@@ -772,6 +772,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			dst_off = 0;
 			dst_len = (type == MEMIF_RING_C2S) ?
 				pmd->run.pkt_buffer_size : d0->length;
+			d0->flags = 0;

 next_in_chain2:
 			src_off = 0;
-- 
You are receiving this mail because:
You are the assignee for the bug.