From: "Ma, Liang" <liang.j.ma@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Van Haaren, Harry" <harry.van.haaren@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Jain, Deepak K" <deepak.k.jain@intel.com>,
	"Mccarthy, Peter" <peter.mccarthy@intel.com>
Subject: Re: [dpdk-dev] [RFC PATCH 0/7] RFC:EventDev OPDL PMD
Date: Wed, 29 Nov 2017 17:15:12 +0000
Message-ID: <20171129171512.GA30238@sivswdev01.ir.intel.com>
In-Reply-To: <20171129125605.GA24298@jerin>

On 29 Nov 04:56, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Wed, 29 Nov 2017 12:19:54 +0000
> > From: "Ma, Liang" <liang.j.ma@intel.com>
> > To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > CC: dev@dpdk.org, "Van Haaren, Harry" <harry.van.haaren@intel.com>,
> >  "Richardson, Bruce" <bruce.richardson@intel.com>, "Jain, Deepak K"
> >  <deepak.k.jain@intel.com>, "Mccarthy, Peter" <peter.mccarthy@intel.com>
> > Subject: Re: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> > User-Agent: Mutt/1.5.20 (2009-06-14)
> > 
> > Hi Jerin,
> >    Many thanks for your comments. Please check my comments below.
> > 
> > On 25 Nov 02:25, Jerin Jacob wrote:
> > > -----Original Message-----
> > > > Date: Fri, 24 Nov 2017 11:23:45 +0000
> > > > From: liang.j.ma@intel.com
> > > > To: jerin.jacob@caviumnetworks.com
> > > > CC: dev@dpdk.org, harry.van.haaren@intel.com, bruce.richardson@intel.com,
> > > >  deepak.k.jain@intel.com, john.geary@intel.com
> > > > Subject: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> > > > X-Mailer: git-send-email 2.7.5
> > > > 
> > > > From: Liang Ma <liang.j.ma@intel.com>
> > > 
> > > 
> > > Thanks Liang Ma for the RFC.
> > > 
> > > > 
> > > > The OPDL (Ordered Packet Distribution Library) eventdev is a specific
> > > > implementation of the eventdev API. It is particularly suited to packet
> > > > processing workloads that have high throughput and low latency 
> > > > requirements. All packets follow the same path through the device.
> > > > The order that packets follow is determined by the order in which the
> > > > queues are set up. Packets are left on the ring until they are
> > > > transmitted, so packets do not go out of order.
> > > > 
> > > > Features:
> > > > 
> > > > The OPDL eventdev implements a subset of the features of the eventdev API:
> > > > 
> > > > Queues
> > > >  * Atomic
> > > >  * Ordered (Parallel is supported as parallel is a subset of Ordered)
> > > >  * Single-Link
> > > > 
> > > > Ports
> > > >  * Load balanced (for Atomic, Ordered, Parallel queues)
> > > >  * Single Link (for single-link queues)
> > > > 
> > > > Single Port Queue
> > > > 
> > > > It is possible to create a Single Port Queue using
> > > > RTE_EVENT_QUEUE_CFG_SINGLE_LINK. Packets dequeued from this queue do
> > > > not need to be re-enqueued (as is the case with an ordered queue). The
> > > > purpose of this queue is to allow for asynchronous handling of packets in
> > > > the middle of a pipeline. Ordered queues in the middle of a pipeline
> > > > cannot delete packets.
> > > > 
> > > > 
> > > > Queue Dependencies
> > > > 
> > > > As stated, the order in which packets travel through queues is static in
> > > > nature. They go through the queues in the order the queues are set up at
> > > > initialisation with rte_event_queue_setup(). For example, if an application
> > > > sets up 3 queues, Q0, Q1, Q2, and has 4 associated ports P0, P1, P2 and
> > > > P3, then packets must be
> > > > 
> > > >  * Enqueued onto Q0 (typically through P0), then
> > > > 
> > > >  * Dequeued from Q0 (typically through P1), then
> > > > 
> > > >  * Enqueued onto Q1 (also through P1), then
> > > > 
> > > >  * Dequeued from Q1 (typically through P2), then
> > > > 
> > > >  * Enqueued onto Q2 (also through P2), then
> > > > 
> > > >  * Dequeued from Q2 (typically through P3) and then transmitted on the
> > > >    relevant eth port
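> > > > 
> > > > As a rough illustration, the setup for the 3-queue pipeline above could
> > > > look like the sketch below (config values are placeholders and error
> > > > handling is omitted):
> > > > 
> > > > 	#include <rte_eventdev.h>
> > > > 
> > > > 	static void
> > > > 	setup_pipeline(uint8_t dev_id)
> > > > 	{
> > > > 		/* Queues are walked in the order they are set up:
> > > > 		 * Q0 -> Q1 -> Q2. */
> > > > 		struct rte_event_queue_conf qconf = {
> > > > 			.schedule_type = RTE_SCHED_TYPE_ORDERED,
> > > > 			.nb_atomic_flows = 1024,
> > > > 			.nb_atomic_order_sequences = 1024,
> > > > 		};
> > > > 		struct rte_event_port_conf pconf = {
> > > > 			.new_event_threshold = 4096,
> > > > 			.dequeue_depth = 32,
> > > > 			.enqueue_depth = 32,
> > > > 		};
> > > > 		uint8_t q, p;
> > > > 
> > > > 		for (q = 0; q < 3; q++)
> > > > 			rte_event_queue_setup(dev_id, q, &qconf);
> > > > 		for (p = 0; p < 4; p++)
> > > > 			rte_event_port_setup(dev_id, p, &pconf);
> > > > 
> > > > 		/* P0 only enqueues to Q0; P1..P3 each dequeue from the
> > > > 		 * previous queue (P1<-Q0, P2<-Q1, P3<-Q2). */
> > > > 		for (p = 1; p < 4; p++) {
> > > > 			uint8_t src = p - 1;
> > > > 			rte_event_port_link(dev_id, p, &src, NULL, 1);
> > > > 		}
> > > > 	}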
> > > > 
> > > > 
> > > > Limitations
> > > > 
> > > > The opdl implementation has a number of limitations. These limitations are
> > > > due to the static nature of the underlying queues; it is because of this
> > > > that the implementation can achieve such high throughput and low latency.
> > > > 
> > > > The following list is a comprehensive outline of what is supported and of
> > > > the limitations / restrictions imposed by the opdl PMD:
> > > > 
> > > >  - The order in which packets move between queues is static and fixed
> > > >    (dynamic scheduling is not supported).
> > > > 
> > > >  - The NEW and RELEASE op types are not explicitly supported. RX (the first
> > > >    enqueue) implicitly adds NEW event types, and TX (the last dequeue)
> > > >    implicitly performs RELEASE event types.
> > > > 
> > > >  - All packets follow the same path through device queues.
> > > > 
> > > >  - Flows within queues are NOT supported.
> > > > 
> > > >  - Event priority is NOT supported.
> > > > 
> > > >  - Once the device is stopped, all inflight events are lost. Applications
> > > >    should clear all inflight events before stopping the device.
> > > > 
> > > >  - Each port can only be associated with one queue.
> > > > 
> > > >  - Each queue can have multiple ports associated with it.
> > > > 
> > > >  - Each worker core has to dequeue the maximum burst size for that port.
> > > > 
> > > >  - For performance, the rte_event flow_id should not be updated once the
> > > >    packet is enqueued on RX (see the worker sketch below).
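> > > > 
> > > > To make the last two points concrete, a minimal worker loop for a middle
> > > > stage could look as follows. This is a sketch only: "done" and "process()"
> > > > are placeholders, and the burst size matches the port's dequeue depth.
> > > > 
> > > > 	static volatile bool done;
> > > > 
> > > > 	static void
> > > > 	worker_stage(uint8_t dev_id, uint8_t port_id, uint8_t next_qid)
> > > > 	{
> > > > 		struct rte_event ev[32];	/* = the port's dequeue_depth */
> > > > 		uint16_t i, nb;
> > > > 
> > > > 		while (!done) {
> > > > 			/* Must request the full burst size for this port. */
> > > > 			nb = rte_event_dequeue_burst(dev_id, port_id,
> > > > 					ev, 32, 0);
> > > > 			for (i = 0; i < nb; i++) {
> > > > 				process(ev[i].mbuf);	/* app-specific work */
> > > > 				ev[i].op = RTE_EVENT_OP_FORWARD;
> > > > 				ev[i].queue_id = next_qid;
> > > > 				/* flow_id deliberately left untouched */
> > > > 			}
> > > > 			rte_event_enqueue_burst(dev_id, port_id, ev, nb);
> > > > 		}
> > > > 	}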
> > > 
> > > Some top-level comments,
> > > 
> > > # How does the application know this PMD has the above limitations?
> > > 
> > > I think we need to add more RTE_EVENT_DEV_CAP_* capabilities
> > > to depict these constraints. On the same note, I believe this
> > > PMD is "radically" different from the other SW/HW PMDs, so anyway
> > > we cannot write a portable application using this PMD. So there
> > > is no point in abstracting it as an eventdev PMD. Could you please
> > > work out what new capabilities are required to enable this PMD.
> > > If it needs many more capability flags to express its capability,
> > > we might need a different library for this, as it defeats the
> > > purpose of portable eventdev applications.
> > >
> > Agreed on improving the capability information by adding more detail
> > via RTE_EVENT_DEV_CAP_*. While OPDL is designed around a different
> 
> Please submit the patches required for the new caps needed by this PMD to
> depict the constraints. That is the only way an application can know
> the constraints for a given PMD.
> 
I will work on the capability issue and submit V2 patches when they are ready.
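As a sketch of what I have in mind, an application could then check the
constraints at init time. The flag name below is hypothetical (not yet in
the API); only the info query itself is an existing call:

	/* Hypothetical: RTE_EVENT_DEV_CAP_STATIC_PIPELINE is not a real
	 * flag yet; it stands in for whatever caps the V2 patches add. */
	struct rte_event_dev_info info;

	rte_event_dev_info_get(dev_id, &info);
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_STATIC_PIPELINE))
		rte_exit(EXIT_FAILURE,
			 "eventdev %u cannot run a static pipeline\n",
			 dev_id);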
> > load-balancing architecture, that of load balancing across pipeline
> > stages where a consumer only works on a single stage, this does not
> > necessarily mean that it is completely incompatible with other eventdev
> > implementations. Although it is true that an application written to use
> > one of the existing eventdevs probably won't work nicely with the OPDL
> > eventdev, the converse situation should work OK. That is, an application
> > written as a pipeline using the OPDL eventdev for load balancing should
> > work without changes with the generic SW implementation, and there should
> > be no reason why it should not also work with other HW implementations
> > in DPDK too.
> > The OPDL PMD implements a subset of the eventdev API's functionality. I
> > demonstrated OPDL at this year's PRC DPDK Summit and got some early
> > feedback from potential users. Most of them would like to use it under the
> > existing API (i.e. eventdev) rather than another new API/lib. That lets
> > potential users more easily swap to existing SW/HW eventdev PMDs.
> 
> Perfect. Let's have one application then, so it will make it easy to swap
> SW/HW eventdev PMDs.
> 
> > 
> > > # We should not add yet another "PMD" specific example application
> > > in example area like "examples/eventdev_pipeline_opdl_pmd". We are
> > > working on making examples/eventdev/pipeline_sw_pmd to make work
> > > on both HW and SW.
> > > 
> > We would agree here that we don't need a proliferation of example
> > applications. However, this is a different architecture (not a dynamic
> > packet scheduler but rather a static pipeline work distributor), and as
> > such perhaps we should have a sample app that demonstrates each
> > contrasting architecture.
> 
> I agree. We need a sample application. Why not change the existing
> examples/eventdev/pipeline_sw_pmd to make it work, since we are addressing
> pipelining here. Let's write the application based on THE USE CASE, not
> specific to a PMD. PMD-specific applications won't scale.
> 
I prefer to hold the OPDL example code out of this patch set;
it's better to upstream/merge the example code on a separate track.
> > 
> > > # We should not add new PMD-specific test cases in the
> > > test/test/test_eventdev_opdl.c area. I think the existing PMD-specific
> > > test cases can be moved to the respective driver area, and each can do
> > > a self-test by passing some command line arguments to the vdev.
> > > 
> > We simply followed the existing test structure here. Would it be confusing
> > to have another variant of the example test code; is this done anywhere
> > else? Also, is there a chance that DTS would miss running the tests, or
> > would not like having to run them using a different method? However, we
> > would defer to the consensus here. Could you elaborate on your concerns
> > with having another test file in the test area?
> 
> PMD-specific test cases won't scale. It defeats the purpose of the common
> framework. Cryptodev fell into that trap earlier, then fixed it.
> For the DTS case, I think it can still be verified through vdev command line
> arguments to the new PMD. What do you think?
> 
Agreed. I would like to integrate the test code with the PMD, but is any API
available for self-test purposes? I didn't find an existing API supporting
self-test. Any hints?
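As a sketch of the direction (both the "self_test" devarg and the
opdl_selftest() entry point below are hypothetical; only rte_kvargs and the
vdev bus API are existing), the probe path could parse a vdev argument and
run the tests, e.g. --vdev="event_opdl0,self_test=1":

	#include <rte_bus_vdev.h>
	#include <rte_kvargs.h>

	static int
	opdl_probe(struct rte_vdev_device *dev)
	{
		static const char * const valid[] = { "self_test", NULL };
		struct rte_kvargs *kvlist;
		unsigned int do_test = 0;

		kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid);
		if (kvlist != NULL) {
			do_test = rte_kvargs_count(kvlist, "self_test");
			rte_kvargs_free(kvlist);
		}
		/* ... normal device setup ... */
		if (do_test)
			opdl_selftest();	/* hypothetical test entry */
		return 0;
	}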

> > 
> > > # Do you have relative performance numbers vs. the existing SW PMD?
> > > Meaning, how much does it improve any specific use case WRT the existing
> > > SW PMD. That should be a metric to define the need for the new PMD.
> > > 
> > Yes, we definitely have the numbers. Given the limitations (see the cover
> > letter), OPDL can achieve a 3x-5x scheduling rate (on a Xeon 2699 v4
> > platform) compared with the standard SW PMD, with no need for a scheduling
> > core. This is the core value of the OPDL PMD. For certain use cases,
> > "static pipeline" and "strong ordering", OPDL is very useful and
> > efficient, and is generic across processor architectures.
> 
> Sounds good.
> 
> > 
> > > # There could be another SW driver from another vendor like ARM.
> > > So, I think it is important to define the need for another SW
> > > PMD, and how many limitations/new capabilities it needs to define to
> > > fit into the eventdev framework.
> > >
> > To summarize: OPDL is designed for certain use cases, where performance
> > increases dramatically. Also, OPDL can fall back to the standard SW PMD
> > seamlessly. That definitely fits into the eventdev API.
> > 
