From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Yigit, Ferruh" <ferruh.yigit@intel.com>,
Dumitru Ceara <dceara@redhat.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Richardson, Bruce" <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
Date: Mon, 28 Sep 2020 13:10:18 +0000
Message-ID: <BYAPR11MB3301B1B70259C60A2471E0679A350@BYAPR11MB3301.namprd11.prod.outlook.com>
In-Reply-To: <1d1c2d4a-ecee-db54-9790-961c143363df@intel.com>
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, September 28, 2020 1:43 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Dumitru Ceara <dceara@redhat.com>; dev@dpdk.org
> Cc: Richardson, Bruce <bruce.richardson@intel.com>
> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>
> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
> >> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> >>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> >>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> >>>>> Even though ring interfaces don't support any other TX/RX offloads, they
> >>>>> do support sending multi-segment packets, and this should be advertised
> >>>>> in order to not break applications that use ring interfaces.
> >>>>>
> >>>>
> >>>> Does the ring PMD support sending multi-segment packets?
> >>>>
> >>>
> >>> Yes, sending multi-segment packets works fine with the ring PMD.
> >>>
> >>
> >> Define "works fine" :)
> >>
> >> All PMDs can put the first mbuf of a chained mbuf into the ring; in that
> >> case, what is the difference between the ones that support
> >> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
> >>
> >> If the traffic is only from ring PMD to ring PMD, you won't notice the
> >> difference between segmented and non-segmented mbufs, and it will look like
> >> segmented packets work fine.
> >> But if other PMDs are involved in the forwarding, or if the packets need to
> >> be processed, will it still work fine?
> >>
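For the ring-to-ring part at least, the reason it "just works" is that the TX
path is basically a pointer enqueue: the mbuf goes into the ring as-is, with
its m->next chain still attached, so whoever dequeues it sees the whole
multi-seg packet. A rough sketch of the idea (simplified, struct and function
names shortened, not the exact driver code):

#include <rte_mbuf.h>
#include <rte_ring.h>

/* simplified per-queue private data; the real driver also keeps stats here */
struct ring_queue {
	struct rte_ring *rng;
};

static uint16_t
ring_tx_burst(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct ring_queue *r = q;

	/* Enqueue the mbuf pointers themselves; segment chains stay linked
	 * via m->next, so the RX side dequeues the full multi-seg packet. */
	return (uint16_t)rte_ring_enqueue_burst(r->rng, (void **)bufs,
			nb_bufs, NULL);
}

So nothing per-segment happens in the PMD itself; it only becomes visible once
the packet reaches a driver or processing stage that actually walks the chain.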
> >>>> As far as I can see, the ring PMD doesn't know about the mbuf segments.
> >>>>
> >>>
> >>> Right, the PMD doesn't care about the mbuf segments, but it implicitly
> >>> supports sending multi-segment packets. From what I see this is actually
> >>> the case for most PMDs, in the sense that most don't even check the
> >>> DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends multi-segment
> >>> packets they are just accepted.
> >> >
> >>
> >> As far as I can see, if segmented packets are sent, the ring PMD will put
> >> the first mbuf into the ring without doing anything specific with the next
> >> segments.
> >>
> >> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect it to detect
> >> segmented packets and put each chained mbuf into a separate entry in the ring.
> >
> > Hmm, I wonder why you think this is necessary?
> > From my perspective the current behaviour is sufficient for TX-ing multi-seg
> > packets over the ring.
> >
>
> I was thinking based on what some PMDs are already doing, but right, the ring
> PMD may not need to do it.
>
> Also consider the case where one application is sending multi-segment packets
> to the ring, and another application is pulling packets from the ring and
> sending them to a PMD that does NOT support multi-seg TX. I thought the ring
> PMD, by claiming multi-seg Tx support, should serialize the packets to support
> this case, but instead the ring PMD claiming the 'DEV_RX_OFFLOAD_SCATTER'
> capability can work by pushing that responsibility to the application.
>
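Makes sense, and that application-side responsibility is cheap to implement
once the capability flags are there to look at. A sketch of the idea
(hypothetical helper names, just to illustrate; a real application would check
the capability once at setup, not per packet):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Query once at setup whether the egress port can take chained mbufs. */
static int
tx_port_supports_multi_seg(uint16_t tx_port)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(tx_port, &info) != 0)
		return 0;
	return (info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}

/* Per packet: flatten a chain pulled from the ring before handing it to a
 * port that did not advertise multi-seg TX support. */
static int
maybe_linearize(struct rte_mbuf *m, int multi_seg_ok)
{
	if (!multi_seg_ok && m->nb_segs > 1)
		return rte_pktmbuf_linearize(m); /* -1 if not enough tailroom */
	return 0;
}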
> So in this case the ring PMD should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
> 'DEV_RX_OFFLOAD_SCATTER'; what do you think?
Seems so...
Another question: should we allow DEV_TX_OFFLOAD_MULTI_SEGS here
if DEV_RX_OFFLOAD_SCATTER was not specified?
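Either way, on the driver side advertising these presumably just means setting
the capability fields in the dev_info callback, something along these lines
(sketch only, not the actual patch):

#include <rte_ethdev.h>

static int
eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
	RTE_SET_USED(dev); /* the setup elided below would normally use it */

	/* ... existing max_rx_queues / max_tx_queues / etc. setup ... */

	/* Packets cross the ring as mbuf pointers, chains included, so both
	 * directions already cope with multi-segment packets. */
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;

	return 0;
}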
>
> >>
> >>>
> >>> However, the fact that the ring PMD doesn't advertise this implicit
> >>> support forces applications that use the ring PMD to have a special case
> >>> for handling ring interfaces. If the ring PMD advertised
> >>> DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be oblivious to the type of
> >>> underlying interface.
> >>>
> >>
> >> This is not about handling a special case for the ring PMD; this is why we
> >> have the offload capability flags. Applications should behave according to
> >> the capability flags, not according to the specific PMD.
> >>
> >> Is there any specific use case you are trying to cover?
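Once the flags are advertised, the generic application-side setup is enough,
with no ring-specific branch anywhere. Roughly (a sketch using the standard
ethdev calls, not tied to any particular application):

#include <rte_ethdev.h>

/* Generic port setup: request the offloads only when the port advertises
 * them, whatever PMD happens to be underneath. */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
		conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
	if (info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}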