From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: "Stillwell Jr, Paul M" <paul.m.stillwell.jr@intel.com>,
"Lu, Wenzhuo" <wenzhuo.lu@intel.com>,
"Yang, Qiming" <qiming.yang@intel.com>
Cc: "Stokes, Ian" <ian.stokes@intel.com>,
"Yigit, Ferruh" <ferruh.yigit@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] net/ice: set min and max MTU
Date: Mon, 6 May 2019 23:10:54 +0000 [thread overview]
Message-ID: <039ED4275CED7440929022BC67E706115337CB21@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <F8A4ECA1C1D86B4081DD6BF503D1F436C749479B@fmsmsx120.amr.corp.intel.com>
> -----Original Message-----
> From: Stillwell Jr, Paul M
> Sent: Tuesday, May 7, 2019 1:39 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Yang, Qiming <qiming.yang@intel.com>
> Cc: Stokes, Ian <ian.stokes@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: RE: [dpdk-dev] [PATCH] net/ice: set min and max MTU
>
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Qi Zhang
> > Sent: Saturday, May 4, 2019 2:46 AM
> > To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>
> > Cc: Stokes, Ian <ian.stokes@intel.com>; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; dev@dpdk.org; Zhang, Qi Z
> > <qi.z.zhang@intel.com>
> > Subject: [dpdk-dev] [PATCH] net/ice: set min and max MTU
> >
> > This commit sets the min and max supported MTU values for ice devices
> > via the
> > i40e_dev_info_get() function. Min MTU supported is set to
> > ETHER_MIN_MTU
>
> Should this be ice_dev_info_get()?
Ah yes, will fix in v2, thanks.
>
> > and max mtu is calculated as the max packet length supported minus the
> > transport overhead.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> > drivers/net/ice/ice_ethdev.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> > index 1f06a2c80..9f5f919f4 100644
> > --- a/drivers/net/ice/ice_ethdev.c
> > +++ b/drivers/net/ice/ice_ethdev.c
> > @@ -1994,6 +1994,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > dev_info->max_tx_queues = vsi->nb_qps;
> > dev_info->max_mac_addrs = vsi->max_macaddrs;
> > dev_info->max_vfs = pci_dev->max_vfs;
> > + dev_info->max_mtu = dev_info->max_rx_pktlen - ICE_ETH_OVERHEAD;
> > + dev_info->min_mtu = ETHER_MIN_MTU;
> >
> > dev_info->rx_offload_capa =
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > --
> > 2.13.6
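
For context, a minimal standalone sketch of how the advertised MTU range relates
to frame-length overhead is below. The overhead layout (Ethernet header + CRC +
two VLAN tags), the minimum MTU of 68, and the 9728-byte max frame used in
main() are assumptions for illustration, not the driver's actual definitions:

/*
 * Illustrative only: an assumed overhead layout, not the ice driver's
 * real ICE_ETH_OVERHEAD definition.
 */
#include <stdint.h>

#define ETHER_MIN_MTU	68	/* assumed minimum MTU */
#define ETHER_HDR_LEN	14	/* dst MAC + src MAC + ethertype */
#define ETHER_CRC_LEN	4
#define VLAN_TAG_SIZE	4
#define ETH_OVERHEAD	(ETHER_HDR_LEN + ETHER_CRC_LEN + 2 * VLAN_TAG_SIZE)

/*
 * Check whether a requested MTU falls inside the range a driver would
 * advertise as [min_mtu, max_rx_pktlen - overhead].
 */
static int
mtu_in_range(uint16_t mtu, uint32_t max_rx_pktlen)
{
	return mtu >= ETHER_MIN_MTU && mtu <= max_rx_pktlen - ETH_OVERHEAD;
}

int
main(void)
{
	/* Example: an assumed 9728-byte max frame allows an MTU up to 9702. */
	return mtu_in_range(9000, 9728) ? 0 : 1;
}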
Thread overview:
2019-05-04  9:45 Qi Zhang
2019-05-06 17:39 ` Stillwell Jr, Paul M
2019-05-06 23:10   ` Zhang, Qi Z [this message]