From: Ferruh Yigit <ferruh.yigit@intel.com>
To: harish.patil@cavium.com, rasesh.mody@cavium.com
Cc: zhouyangchao <zhouyates@gmail.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/bnx2x: reserve enough headroom for mbuf prepend
Date: Fri, 20 Apr 2018 11:31:36 +0100
Message-ID: <c4cb9a60-5233-1364-2ff3-d21265b66542@intel.com>
In-Reply-To: <CABLiTuxYAtevgtmusOZN7c8+o-rTt9bZ1B_MVx9pxCLLEKrbQw@mail.gmail.com>
On 3/8/2018 5:57 AM, zhouyangchao wrote:
> When allocating a new mbuf for Rx, the value of m->data_off should be
> reset to its default value (RTE_PKTMBUF_HEADROOM) instead of reusing
> the previous, undefined value, which could leave the packet with too
> little or too much headroom.
Hi Harish, Rasesh,
A reminder that this patch is waiting for your review.
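
For reference, a minimal sketch of the point above (hypothetical helper name, not the driver code): mbufs taken with rte_mbuf_raw_alloc() are not reset, so m->data_off can still hold whatever the previous user left behind, and the Rx descriptor address has to be derived from the default headroom instead.

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_byteorder.h>

/* Hypothetical helper: compute the IOVA to program into an Rx buffer
 * descriptor for a raw-allocated mbuf. Deriving the address from a
 * stale m->data_off could leave too little (or too much) headroom for
 * a later rte_pktmbuf_prepend(). */
static inline uint64_t
rx_desc_iova(struct rte_mbuf *m)
{
        /* buf_iova + RTE_PKTMBUF_HEADROOM: the NIC writes the frame
         * past the default headroom, so up to RTE_PKTMBUF_HEADROOM
         * bytes stay available for rte_pktmbuf_prepend(). */
        return rte_cpu_to_le_64(rte_mbuf_data_iova_default(m));
}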
>
> On Mon, Mar 5, 2018 at 11:28 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 2/6/2018 11:21 AM, zhouyangchao wrote:
>
> Can you please provide more information on why this patch is needed?
>
> > Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
> > ---
> > drivers/net/bnx2x/bnx2x_rxtx.c | 8 +++++---
> > 1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/bnx2x/bnx2x_rxtx.c b/drivers/net/bnx2x/bnx2x_rxtx.c
> > index a0d4ac9..d8a3225 100644
> > --- a/drivers/net/bnx2x/bnx2x_rxtx.c
> > +++ b/drivers/net/bnx2x/bnx2x_rxtx.c
> > @@ -140,7 +140,8 @@ bnx2x_dev_rx_queue_setup(struct rte_eth_dev *dev,
> > return -ENOMEM;
> > }
> > rxq->sw_ring[idx] = mbuf;
> > - rxq->rx_ring[idx] = mbuf->buf_iova;
> > + rxq->rx_ring[idx] =
> > + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
> > }
> > rxq->pkt_first_seg = NULL;
> > rxq->pkt_last_seg = NULL;
> > @@ -400,7 +401,8 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> >
> > rx_mb = rxq->sw_ring[bd_cons];
> > rxq->sw_ring[bd_cons] = new_mb;
> > - rxq->rx_ring[bd_prod] = new_mb->buf_iova;
> > + rxq->rx_ring[bd_prod] =
> > + rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mb));
> >
> > rx_pref = NEXT_RX_BD(bd_cons) & MAX_RX_BD(rxq);
> > rte_prefetch0(rxq->sw_ring[rx_pref]);
> > @@ -409,7 +411,7 @@ bnx2x_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > rte_prefetch0(&rxq->sw_ring[rx_pref]);
> > }
> >
> > - rx_mb->data_off = pad;
> > + rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
> > rx_mb->nb_segs = 1;
> > rx_mb->next = NULL;
> > rx_mb->pkt_len = rx_mb->data_len = len;
> >
>
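
Putting the two hunks together, the post-patch Rx replenish step looks roughly like the following sketch (hypothetical names, a trimmed-down queue struct, ring-index arithmetic and error handling omitted):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_byteorder.h>

/* Hypothetical, trimmed-down queue layout used only for this sketch. */
struct rxq_sketch {
        struct rte_mbuf **sw_ring;   /* mbufs currently owned by the ring */
        uint64_t *rx_ring;           /* buffer descriptors: LE IOVAs */
};

/* Hand the received mbuf (rx_mb) to the application and put a fresh
 * one (new_mb) back on the ring, mirroring what the patch does. */
static void
replenish_rx_bd(struct rxq_sketch *rxq, uint16_t idx,
                struct rte_mbuf *new_mb, struct rte_mbuf *rx_mb,
                uint16_t pad, uint16_t len)
{
        rxq->sw_ring[idx] = new_mb;
        /* DMA starts at the default data offset, not at raw buf_iova. */
        rxq->rx_ring[idx] =
                rte_cpu_to_le_64(rte_mbuf_data_iova_default(new_mb));

        /* The frame sits "pad" bytes past where DMA started, and DMA
         * started RTE_PKTMBUF_HEADROOM bytes into the buffer, so both
         * offsets go into data_off. */
        rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
        rx_mb->nb_segs = 1;
        rx_mb->next = NULL;
        rx_mb->pkt_len = rx_mb->data_len = len;
}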