DPDK patches and discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>,
	Mordechay Haimovsky <motih@mellanox.com>,
	"pascal.mazon@6wind.com" <pascal.mazon@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH V5 2/2] net/tap: use new Rx offloads API
Date: Thu, 15 Mar 2018 06:16:37 +0000	[thread overview]
Message-ID: <DB7PR05MB4426B48C314CFCBD0A554062C3D00@DB7PR05MB4426.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <457bcfb5-b2bc-8616-1cb4-c9b6fdeb0e57@intel.com>

Thursday, March 15, 2018 12:41 AM, Ferruh Yigit:
> On 3/14/2018 5:49 AM, Shahaf Shuler wrote:
> > Tuesday, March 13, 2018 1:57 PM, Ferruh Yigit:
> >>>
> >>> Again - the application should follow the API, which currently
> >>> dictates how to set port offloads. It does not depend on the
> >>> rx_queue_offloads capabilities.
> >>> For example, a PMD which doesn't support queue offloads can still
> >>> verify, per the API, that each port offload is also set in the
> >>> queue offloads.
> >>
> >> I don't agree with this part. Why dictate that the application set
> >> queue offloads if it already knows the device doesn't support
> >> queue-specific offloads?
> >
> > I agree we can make a small change in the API to not force the
> > application to set the port offloads in the queue configuration.
> > It makes sense. The change will be:
> > "Port offloads should be set in the port configuration. Queue
> > offloads should be set in the queue configuration."
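
A minimal sketch of that split, assuming the post-17.11 ethdev offload
API (the offload flags and the setup_port() helper here are purely
illustrative, not taken from any patch):

#include <rte_ethdev.h>

/* Sketch only: error handling trimmed; the flags are arbitrary
 * examples of one port-level and one queue-level offload. */
static int
setup_port(uint16_t port_id, uint16_t nb_rxd, unsigned int socket_id,
	   struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			/* Port-level offloads go here. */
			.offloads = DEV_RX_OFFLOAD_CRC_STRIP,
			.ignore_offload_bitfield = 1,
		},
	};
	struct rte_eth_rxconf rxq_conf = {
		/* Queue-level offloads go here. */
		.offloads = DEV_RX_OFFLOAD_SCATTER,
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;
	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
				      &rxq_conf, mb_pool);
}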
> 
> I am OK with this one; it is more reasonable for devices that support
> only port-level offloads.
> 
> This looks the same as option #2 mentioned in the previous mails.
> 
> >
> >>
> >> In some of the existing PMD patches that switch to the new
> >> offloading API, the PMD sets [rt]x_queue_offload_capa the same as
> >> [rt]x_offload_capa,
> >
> > Well, this is just wrong, unless those PMDs support all the offloads
> > at queue level.
> >
> > The logic is "every queue offload can be counted as a port offload",
> > because such an offload can be set on each and every queue.
> > The other way around is not correct: a port offload cannot be counted
> > as a queue offload.
> >
> > So if such PMDs have offloads which are supported only at the port
> > level, those cannot be declared as queue offloads.
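
In dev_info terms the rule reads: the queue capabilities are a subset of
the port capabilities. A hypothetical PMD sketch (the function name and
flags are examples, not any driver's actual code):

static void
xxx_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
	/* Offloads this hypothetical PMD can toggle per queue. */
	uint64_t queue_capa = DEV_RX_OFFLOAD_SCATTER;
	/* Offloads it can only toggle for the whole port. */
	uint64_t port_only_capa = DEV_RX_OFFLOAD_VLAN_FILTER;

	(void)dev;
	/* Every queue offload is also a valid port offload... */
	dev_info->rx_offload_capa = queue_capa | port_only_capa;
	/* ...but port-only offloads never show up at queue level. */
	dev_info->rx_queue_offload_capa = queue_capa;
}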
> 
> Thanks for confirming. It would be great if you could help review the
> PMD new offload API patches, to catch these kinds of issues.

Sure, Cc me on the patches so they can pass through my mailbox filters.

> 
> >
> >
> >> in that case
> >> the application can't know whether queue-specific offloads are
> >> supported, and it may try to set queue offloads anyway; this forces
> >> the PMD to verify them.
> >>
> >> You confirmed [rt]x_queue_offload_capa is the way for the application
> >> to know whether the device supports queue-specific offloads. If these
> >> values are always set to [rt]x_offload_capa, the application loses
> >> this capability.
> >>
> >> Instead:
> >> - A PMD that doesn't support queue-specific offloads should set
> >> [rt]x_queue_offload_capa to 0
> >> - When [rt]x_queue_offload_capa is 0, the application should be free
> >> to set whatever queue offloads it wants
> >
> > I don't agree. When queue_offload_capa is 0, the expected behavior
> > from the application is not to set any queue offload (if we make the
> > change in the API that you are pushing for).
> > PMDs can verify it or not, but if the capability is not set, the
> > application should not set the offload. This is how the API should
> > be defined.
> 
> OK for this one.
> 
> >
> >> - When [rt]x_queue_offload_capa is 0, the PMD should be free to
> >> verify queue offloads, but most probably shouldn't, since we don't
> >> know what the application will send.
> >>
> >> - When [rt]x_queue_offload_capa is != 0, applications should set
> >> queue offloads to at least "[rt]x_queue_offload = [rt]x_offload"
> >
> > If we make the change you are pushing for, that is not needed.
> > The application will set the port offloads in the port configuration,
> > and the queue offloads in the queue configuration.
> > No special treatment based on the offloads_capa is needed.
> 
> Right.
> 
> >
> >> - When [rt]x_queue_offload_capa is != 0, the PMD should verify the
> >> queue offloads
> >>
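
That per-queue verification amounts to a subset test; a hypothetical
helper (not from any driver):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: a per-queue offload request is valid only if it
 * is a subset of the advertised [rt]x_queue_offload_capa. */
static bool
queue_offloads_ok(uint64_t requested, uint64_t queue_offload_capa)
{
	return (requested & ~queue_offload_capa) == 0;
}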
> 
> 
> Back to the initial question :) - does tap support queue-level offloads?
> If not, it shouldn't report or check queue offloads.
> 
> 
> Although this will change after the API change suggested above, I think
> the check in the existing tap queue_setup, and the same one in mlx5, is
> wrong.
> 
> static bool
> tap_rxq_are_offloads_valid(struct rte_eth_dev *dev, uint64_t offloads)
> {
>         uint64_t port_offloads = dev->data->dev_conf.rxmode.offloads;
>         uint64_t queue_supp_offloads = tap_rx_offload_get_queue_capa();
>         uint64_t port_supp_offloads = tap_rx_offload_get_port_capa();
> 
> <...>
>         /* Fail if any port-capable offload bit differs between the
>          * port-level config and this queue's requested offloads. */
>         if ((port_offloads ^ offloads) & port_supp_offloads)
>                 return false;
>         return true;
> }
> 
> 
> Take the example:
> port_supp_offloads  = 11111
> port_offloads       =   111
> queue_supp_offloads =  1111
> offloads            =  1111
> 
> (port_offloads ^ offloads) & port_supp_offloads = 1000, which will
> return false.
> 
> This only works if "port_offloads == offloads", which practically means
> supporting only port-level offloads.

For mlx5, port_supp_offloads is an internal function which returns **only**
the pure port offloads (the port offloads reported in dev_info are
rx_offload_get_queue_capa() | rx_offload_get_port_capa()).
That is, an offload cannot be in both the port and the queue offload
capabilities, so the scenario above is not feasible.
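
To make both readings concrete, a small self-contained program that
replays the bit example from the mail above (values copied from
Ferruh's example; only the arithmetic is being demonstrated):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Ferruh's example: port and queue capabilities overlap. */
	uint64_t port_supp_offloads = 0x1f; /* 11111 */
	uint64_t port_offloads      = 0x07; /*   111 */
	uint64_t offloads           = 0x0f; /*  1111 */

	/* The tap-style check rejects this request: bit 3 differs
	 * between port and queue config and is part of port capa. */
	assert(((port_offloads ^ offloads) & port_supp_offloads) == 0x08);

	/* mlx5 reading: port_supp_offloads contains only the pure port
	 * offloads, disjoint from the queue capabilities, so a
	 * queue-level difference never trips the check. */
	port_supp_offloads = 0x10; /* 10000, port-only bits */
	assert(((port_offloads ^ offloads) & port_supp_offloads) == 0);

	puts("bit example behaves as described");
	return 0;
}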




Thread overview: 23+ messages
2018-01-04 19:18 [dpdk-dev] [PATCH V3 0/2] net/tap: convert to new ethdev " Moti Haimovsky
2018-01-04 19:18 ` [dpdk-dev] [PATCH V3 1/2] net/tap: convert to new Tx " Moti Haimovsky
2018-01-05  8:18   ` Pascal Mazon
2018-01-04 19:18 ` [dpdk-dev] [PATCH V3 2/2] net/tap: convert to new Rx " Moti Haimovsky
2018-01-05  8:26   ` Pascal Mazon
2018-01-10 16:20   ` [dpdk-dev] [PATCH V4 1/2] net/tap: convert to new Tx " Moti Haimovsky
2018-01-10 16:20     ` [dpdk-dev] [PATCH V4 2/2] net/tap: convert to new Rx " Moti Haimovsky
2018-01-10 16:42       ` Pascal Mazon
2018-01-17 14:04       ` [dpdk-dev] [PATCH V5 1/2] net/tap: use new Tx " Moti Haimovsky
2018-01-17 14:04         ` [dpdk-dev] [PATCH V5 2/2] net/tap: use new Rx " Moti Haimovsky
2018-03-02 21:44           ` Ferruh Yigit
2018-03-12 14:20             ` Shahaf Shuler
2018-03-12 16:59               ` Ferruh Yigit
2018-03-12 17:58                 ` Shahaf Shuler
2018-03-12 19:05                   ` Ferruh Yigit
2018-03-13  7:08                     ` Shahaf Shuler
2018-03-13 11:56                       ` Ferruh Yigit
2018-03-14  5:49                         ` Shahaf Shuler
2018-03-14 22:40                           ` Ferruh Yigit
2018-03-15  6:16                             ` Shahaf Shuler [this message]
2018-03-15 14:34                               ` Ferruh Yigit
2018-01-18 14:02         ` [dpdk-dev] [PATCH V5 1/2] net/tap: use new Tx " Pascal Mazon
2018-01-18 15:19           ` Ferruh Yigit
