From: Moti Haimovsky <motih@mellanox.com>
To: Matan Azrad <matan@mellanox.com>,
Wenzhuo Lu <wenzhuo.lu@intel.com>,
Jingjing Wu <jingjing.wu@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 1/2] app/testpmd: fix scatter offload configuration
Date: Tue, 30 Jul 2019 11:36:30 +0000 [thread overview]
Message-ID: <AM0PR05MB4435D7431A94DFC38E029D83D2DC0@AM0PR05MB4435.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <AM0PR0502MB401930DC963BCA1BB6E4DD46D2DC0@AM0PR0502MB4019.eurprd05.prod.outlook.com>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Matan Azrad
> > Sent: Monday, July 29, 2019 3:37 PM
> > To: Wenzhuo Lu <wenzhuo.lu@intel.com>; Jingjing Wu
> > <jingjing.wu@intel.com>
> > Cc: dev@dpdk.org; stable@dpdk.org
> > Subject: [dpdk-dev] [PATCH 1/2] app/testpmd: fix scatter offload
> > configuration
> >
> > When the mbuf data size cannot contain the maximum Rx packet length
> > with the mbuf headroom, a packet should be scattered in more than one
> mbuf.
> >
> > The application did not configure scatter offload in the above case.
> >
> > Enable the Rx scatter offload in the above case.
> >
> > Fixes: 33f9630fc23d ("app/testpmd: create mbuf based on max supported
> > segments")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Moti Haimovsky <motih@mellanox.com>
> > ---
> > app/test-pmd/testpmd.c | 11 +++++++++++
> > 1 file changed, 11 insertions(+)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 518865a..4ae70ef 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -1191,6 +1191,17 @@ struct extmem_param {
> >  				warning = 1;
> >  			}
> >  		}
> > +		if (rx_mode.max_rx_pkt_len + RTE_PKTMBUF_HEADROOM >
> > +				mbuf_data_size) {
> > +			if (port->dev_info.rx_queue_offload_capa &
> > +					DEV_RX_OFFLOAD_SCATTER)
> > +				port->dev_conf.rxmode.offloads |=
> > +					DEV_RX_OFFLOAD_SCATTER;
> > +			else
> > +				TESTPMD_LOG(WARNING, "Configure scatter is"
> > +					" needed and cannot be configured"
> > +					" in the port %u\n", pid);
> > +		}
> >  	}
> >
> > if (warning)
> > --
> > 1.8.3.1
Thread overview: 16+ messages
2019-07-29 12:36 Matan Azrad
2019-07-29 12:36 ` [dpdk-dev] [PATCH 2/2] app/testpmd: add bits per second to statistics Matan Azrad
2019-07-30 11:41 ` Moti Haimovsky
2019-10-08 14:19 ` Yigit, Ferruh
2019-07-30 9:00 ` [dpdk-dev] [PATCH 1/2] app/testpmd: fix scatter offload configuration Matan Azrad
2019-07-30 11:36 ` Moti Haimovsky [this message]
2019-07-30 13:09 ` Ferruh Yigit
2019-07-30 13:17 ` Matan Azrad
2019-07-30 15:21 ` Ferruh Yigit
2019-07-30 15:56 ` Matan Azrad
2019-07-30 17:28 ` Ferruh Yigit
2019-07-30 18:34 ` Matan Azrad
2019-07-30 18:55 ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
2019-07-31 6:11 ` Matan Azrad
2019-10-08 14:18 ` Yigit, Ferruh
2019-10-22 7:06 ` Matan Azrad