DPDK patches and discussions
From: "Zhao1, Wei" <wei.zhao1@intel.com>
To: Hideyuki Yamashita <yamashita.hideyuki@po.ntt-tx.co.jp>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Question about jumbo frame support on ixgbe
Date: Mon, 5 Nov 2018 09:47:59 +0000	[thread overview]
Message-ID: <A2573D2ACFCADC41BB3BE09C6DE313CA07E69049@PGSMSX103.gar.corp.intel.com> (raw)
In-Reply-To: <201811020139.wA21cx3s022715@ccmail04.silk.ntt-tx.co.jp>

Hi, Hideyuki Yamashita

> -----Original Message-----
> From: Hideyuki Yamashita [mailto:yamashita.hideyuki@po.ntt-tx.co.jp]
> Sent: Friday, November 2, 2018 9:38 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Question about jumbo frame support on ixgbe
> 
> Hi
> 
> Thanks for answering my question.
> Please see inline.
> > Hi,  Hideyuki Yamashita
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hideyuki
> > > Yamashita
> > > Sent: Wednesday, October 31, 2018 4:22 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] Question about jumbo frame support on ixgbe
> > >
> > > Hi,
> > >
> > > I have a very basic question about jumbo frame support for ixgbe.
> > >
> > > I understand that some drivers support jumbo frames: if such a driver
> > > receives a jumbo packet (greater than 1500 bytes), it creates an mbuf
> > > chain and passes it to the DPDK application through e.g. rte_eth_rx_burst.
> > >
> > > However, it looks like the ixgbe driver does not support jumbo frames.
> > >
> > > Q1. Is my understanding above correct?
> > > Q2. If A1 is YES, are there any future plans to support jumbo
> > > frames on ixgbe?
> >
> > Your understanding above was correct, but 82599 and x550 do support jumbo
> > frame receive by now!
> > In order to use this feature on ixgbe, you need to do the following steps:
> >
> > 1. You must set dev_conf.rxmode.max_rx_pkt_len to a large value, e.g.
> > 9500, when configuring the port, so that it takes effect when the port is
> > started with rte_eth_dev_start().
> > ixgbe_dev_rx_init() will choose a scatter receive function if
> > max_rx_pkt_len is larger than the mbuf size; you do not need to set the
> > DEV_RX_OFFLOAD_SCATTER bit in dev_conf.rxmode.offloads, as the PMD
> > does that itself when it detects that jumbo frames need to be supported.
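
A minimal sketch of step 1 for illustration, assuming the DPDK 18.11-era
rte_ethdev API; the configure_jumbo_rx() helper, the single queue pair, the
1024-descriptor ring and the 9500-byte value are hypothetical examples:

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Ask for frames up to 9500 bytes and mark the port as jumbo-capable.
 * If max_rx_pkt_len is larger than the mbuf data size, ixgbe picks a
 * scatter receive function by itself, so DEV_RX_OFFLOAD_SCATTER is not
 * set here.  All values below are illustrative only. */
static int
configure_jumbo_rx(uint16_t port_id, struct rte_mempool *mb_pool)
{
    struct rte_eth_conf port_conf = { 0 };
    int ret;

    port_conf.rxmode.max_rx_pkt_len = 9500;
    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;

    ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    if (ret != 0)
        return ret;

    return rte_eth_rx_queue_setup(port_id, 0, 1024,
                                  rte_eth_dev_socket_id(port_id),
                                  NULL, mb_pool);
}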
> Thanks for your info.
> 
> > 2. Set the DEV_TX_OFFLOAD_MULTI_SEGS bit in dev_conf.txmode.offloads
> > (or in the queue offloads used in rte_eth_tx_queue_setup()); this is very
> > important!!
> > If you do not do this, you may receive jumbo frames but fail to forward
> > them out, because, as you say, a received packet may be an mbuf chain
> > (depending on the relationship between the mbuf size and max_rx_pkt_len).
> > Note that the way the tx function is selected in ixgbe_set_tx_function()
> > can be confusing; take care of it, as it is based on the queue offloads
> > bits!
> What will happen if DEV_TX_OFFLOAD_MULTI_SEGS is set to 1?
> Are packets sent fragmented, or re-built as a jumbo frame before being sent
> to the network?
> (My guess is the former, i.e. the packet will be sent fragmented.)

When receiving from the network, ixgbe stores a jumbo frame across several mbuf segments.
The PMD does not rebuild them into one contiguous jumbo packet; ixgbe handles the segment chain when transmitting it.
The PMD needs a flag to indicate that a packet may be a jumbo (multi-segment) one and to act accordingly:
DEV_TX_OFFLOAD_MULTI_SEGS tells the PMD to choose a tx function suited for that work.
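
A minimal TX-side sketch of step 2 under the same assumptions (DPDK 18.11-era
API; the setup_multiseg_tx() helper, queue id 0 and the 1024-descriptor ring
are hypothetical examples):

#include <rte_ethdev.h>

/* Request multi-segment transmit at port level and mirror it into the
 * per-queue config, so that the queue offload bits inspected by
 * ixgbe_set_tx_function() include DEV_TX_OFFLOAD_MULTI_SEGS. */
static int
setup_multiseg_tx(uint16_t port_id, struct rte_eth_conf *port_conf)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txconf;
    int ret;

    port_conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

    ret = rte_eth_dev_configure(port_id, 1, 1, port_conf);
    if (ret != 0)
        return ret;

    rte_eth_dev_info_get(port_id, &dev_info);
    txconf = dev_info.default_txconf;
    txconf.offloads = port_conf->txmode.offloads;

    return rte_eth_tx_queue_setup(port_id, 0, 1024,
                                  rte_eth_dev_socket_id(port_id),
                                  &txconf);
}

In a real application the rxmode fields from step 1 and the txmode offload
here would of course go into the same rte_eth_dev_configure() call.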
 
> 
> > 3. Enable it using the CLI command "port config mtu <port_id> <value>" if
> > you are using testpmd, or using the API rte_eth_dev_set_mtu() in your own
> > application.
> > The MTU is simply the value you need to raise to a large number.
> I want to know the relationship between 1, 2 and 3.
> Do I have to do all three steps to send/receive jumbo frames?
> Or, when I implement jumbo frame support programmatically, do I execute 1 and 2,
> and execute 3 only when I do not modify the program and just change the setting
> via the CLI?

If you are using the testpmd app, you can execute (3). If not, you can set the DEV_RX_OFFLOAD_JUMBO_FRAME bit
in dev_conf.rxmode.offloads when you initialize the ixgbe PMD. Steps 1-2-3 are all steps that configure the registers needed to enable jumbo frames.
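
And a sketch of step 3 done from application code rather than the testpmd CLI
(DPDK 18.11-era API; the raise_mtu() helper and the MTU of 9000 are
hypothetical examples):

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

/* Equivalent of the testpmd command "port config mtu <port_id> 9000". */
static void
raise_mtu(uint16_t port_id)
{
    int ret = rte_eth_dev_set_mtu(port_id, 9000);

    if (ret != 0)
        printf("cannot set MTU on port %u: %s\n",
               (unsigned int)port_id, strerror(-ret));
}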


> 
> Thanks and BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > And all my discussion is based on a PF port; if you are using a VF, we can
> > have a further discussion.
> > Please feel free to contact me if necessary.
> >
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> >
> 
> 



Thread overview: 11+ messages
2018-10-31 14:49 [dpdk-dev] [PATCH] Doc: add known issue for restricted vdev ethdev ops in secondary process Marvin Liu
2018-10-31  8:22 ` [dpdk-dev] Question about jumbo frame support on ixgbe Hideyuki Yamashita
2018-10-31 15:48   ` Stephen Hemminger
2018-11-01  3:27     ` Zhao1, Wei
2018-11-01  6:55     ` Zhao1, Wei
2018-11-01  3:10   ` Zhao1, Wei
2018-11-02  1:38     ` Hideyuki Yamashita
2018-11-05  9:47       ` Zhao1, Wei [this message]
2018-11-01  3:12   ` Zhao1, Wei
2018-11-22 18:07 ` [dpdk-dev] [PATCH] Doc: add known issue for restricted vdev ethdev ops in secondary process Mcnamara, John
2018-11-23  2:07   ` Thomas Monjalon
