DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: Bill Fischofer <bill.fischofer@linaro.org>
Cc: <dev@dpdk.org>, <thomas.monjalon@6wind.com>,
	<bruce.richardson@intel.com>,  <narender.vangati@intel.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>, <gage.eads@intel.com>
Subject: Re: [dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK
Date: Fri, 14 Oct 2016 14:56:09 +0530
Message-ID: <20161014092607.GA16828@localhost.localdomain>
In-Reply-To: <CAKb83kahuY-m52QEsTbL6aPAQxsQ1qPyWcphRienRQiaipVnnA@mail.gmail.com>

On Thu, Oct 13, 2016 at 11:14:38PM -0500, Bill Fischofer wrote:
> Hi Jerin,

Hi Bill,

Thanks for the review.

[snip]
> > + * If the device init operation is successful, the correspondence between
> > + * the device identifier assigned to the new device and its associated
> > + * *rte_event_dev* structure is effectively registered.
> > + * Otherwise, both the *rte_event_dev* structure and the device identifier are
> > + * freed.
> > + *
> > + * The functions exported by the application Event API to setup a device
> > + * designated by its device identifier must be invoked in the following order:
> > + *     - rte_event_dev_configure()
> > + *     - rte_event_queue_setup()
> > + *     - rte_event_port_setup()
> > + *     - rte_event_port_link()
> > + *     - rte_event_dev_start()
> > + *
> > + * Then, the application can invoke, in any order, the functions
> > + * exported by the Event API to schedule events, dequeue events, enqueue events,
> > + * change event queue(s) to event port [un]link establishment and so on.
> > + *
> > + * Application may use rte_event_[queue/port]_default_conf_get() to get the
> > + * default configuration to set up an event queue or event port by
> > + * overriding few default values.
> > + *
> > + * If the application wants to change the configuration (i.e. call
> > + * rte_event_dev_configure(), rte_event_queue_setup(), or
> > + * rte_event_port_setup()), it must call rte_event_dev_stop() first to stop the
> > + * device and then do the reconfiguration before calling rte_event_dev_start()
> > + * again. The schedule, enqueue and dequeue functions should not be invoked
> > + * when the device is stopped.
> >
> 
> Given this requirement, the question is what happens to events that are "in
> flight" at the time rte_event_dev_stop() is called? Is stop an asynchronous
> operation that quiesces the event _dev and allows in-flight events to drain
> from queues/ports prior to fully stopping, or is some sort of separate
> explicit quiesce mechanism required? If stop is synchronous and simply
> halts the event_dev, then how is an application to know if subsequent
> configure/setup calls would leave these pending events with no place to
> stand?
>

From an application API perspective, rte_event_dev_stop() is a synchronous function.
If stop has been called in order to re-configure the number of queues, ports etc. of
the device, then "in flight" entry preservation is implementation defined;
otherwise, "in flight" entries will be preserved.

[snip]
 
> > +extern int
> > +rte_event_dev_socket_id(uint8_t dev_id);
> > +
> > +/* Event device capability bitmap flags */
> > +#define RTE_EVENT_DEV_CAP_QUEUE_QOS        (1 << 0)
> > +/**< Event scheduling prioritization is based on the priority associated with
> > + *  each event queue.
> > + *
> > + *  \see rte_event_queue_setup(), RTE_EVENT_QUEUE_PRIORITY_NORMAL
> > + */
> > +#define RTE_EVENT_DEV_CAP_EVENT_QOS        (1 << 1)
> > +/**< Event scheduling prioritization is based on the priority associated with
> > + *  each event. Priority of each event is supplied in *rte_event* structure
> > + *  on each enqueue operation.
> > + *
> > + *  \see rte_event_enqueue()
> > + */
> > +
> > +/**
> > + * Event device information
> > + */
> > +struct rte_event_dev_info {
> > +       const char *driver_name;        /**< Event driver name */
> > +       struct rte_pci_device *pci_dev; /**< PCI information */
> > +       uint32_t min_dequeue_wait_ns;
> > +       /**< Minimum supported global dequeue wait delay(ns) by this device */
> > +       uint32_t max_dequeue_wait_ns;
> > +       /**< Maximum supported global dequeue wait delay(ns) by this device */
> > +       uint32_t dequeue_wait_ns;
> >
> 
> Am I reading this correctly that there is no way to support an indefinite
> waiting capability? Or is this just saying that if a timed wait is
> performed there are min/max limits for the wait duration?

Application can wait indefinitely if required; see the
RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT configuration option.

A trivial application may not need different wait values on each dequeue. This
is a performance optimization opportunity for the implementation.
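
For example, something like below (sketch only; dequeue_wait_ns and the CFG
flag are from this RFC, the remaining field names and the dequeue prototype
are my assumptions):

	uint8_t dev_id = 0, port_id = 0;
	uint64_t wait = 1000;	/* units as defined by the dequeue API */
	struct rte_event_dev_config cfg;
	struct rte_event ev;

	/* global wait: one device-wide value, nothing extra per dequeue call */
	memset(&cfg, 0, sizeof(cfg));
	cfg.dequeue_wait_ns = 10 * 1000;	/* within [min, max]_dequeue_wait_ns */
	rte_event_dev_configure(dev_id, &cfg);

	/* per-dequeue wait: opt in at configure time, then supply a wait value
	 * on each dequeue; a special value can request an indefinite wait */
	cfg.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT;
	rte_event_dev_configure(dev_id, &cfg);
	rte_event_dequeue(dev_id, port_id, &ev, wait);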

> 
> 
> > +       /**< Configured global dequeue wait delay(ns) for this device */
> > +       uint8_t max_event_queues;
> > +       /**< Maximum event_queues supported by this device */
> > +       uint32_t max_event_queue_flows;
> > +       /**< Maximum supported flows in an event queue by this device*/
> > +       uint8_t max_event_queue_priority_levels;
> > +       /**< Maximum number of event queue priority levels by this device.
> > +        * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > +        */
> > +       uint8_t nb_event_queues;
> > +       /**< Configured number of event queues for this device */
> >
> 
> Is 256 a sufficient number of queues? While various SoCs may have limits,
> why impose such a small limit architecturally?

Each event queue can potentially support millions of flows, so 256 may not be
such a small limit. The reason for choosing an 8-bit queue_id is to fit all the
attributes of "struct rte_event" in 128 bits, so that SIMD optimization is
possible in the implementation.
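
To illustrate the 128-bit point, something like below (this is NOT the exact
v2 layout, just an example with field names/widths of my choosing to show why
each attribute is kept narrow):

	#include <stdint.h>

	struct event_example {
		/* first 64 bits: event metadata */
		uint64_t flow_id    : 20;	/* flow within the target event queue */
		uint64_t queue_id   : 8;	/* 8 bits => at most 256 event queues */
		uint64_t sched_type : 2;	/* atomic/ordered/parallel */
		uint64_t priority   : 8;
		uint64_t rsvd       : 26;
		/* second 64 bits: payload, e.g. an mbuf pointer or opaque u64 */
		union {
			uint64_t u64;
			void *event_ptr;
		};
	};

	/* the whole event fits in 128 bits, so an implementation can move and
	 * compare events with single 128-bit (SIMD) loads and stores */
	_Static_assert(sizeof(struct event_example) == 16, "event must be 128 bit");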

> > +       /**< The maximum number of active flows this queue can track at any
> > +        * given time. The value must be in the range of
> > +        * [1 - max_event_queue_flows)] which previously supplied
> > +        * to rte_event_dev_configure().
> > +        */
> > +       uint32_t nb_atomic_order_sequences;
> > +       /**< The maximum number of outstanding events waiting to be (egress-)
> > +        * reordered by this queue. In other words, the number of entries in
> > +        * this queue’s reorder buffer. The value must be in the range of
> > +        * [1 - max_event_queue_flows)] which previously supplied
> > +        * to rte_event_dev_configure().
> >
> 
> What happens if this limit is exceeded? While atomic limits are bounded by
> the number of lcores, the same cannot be said for ordered queues.
> Presumably the queue would refuse further dequeues once this limit is
> reached until pending reorders are resolved to permit continued processing?
> If so that should be stated explicitly.

OK. I will update the documentation with those details.
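
For reference, the way I expect an application to size those two limits against
the device capabilities is roughly as below (sketch; nb_atomic_flows is the
sibling field documented just above nb_atomic_order_sequences, and the exact
prototypes may differ in this RFC):

	uint8_t dev_id = 0, queue_id = 0;
	struct rte_event_dev_info info;
	struct rte_event_queue_conf qconf;

	rte_event_dev_info_get(dev_id, &info);

	memset(&qconf, 0, sizeof(qconf));
	/* both values must stay within max_event_queue_flows given at configure time */
	qconf.nb_atomic_flows = RTE_MIN(1024u, info.max_event_queue_flows);
	qconf.nb_atomic_order_sequences = RTE_MIN(4096u, info.max_event_queue_flows);

	if (rte_event_queue_setup(dev_id, queue_id, &qconf) < 0)
		rte_exit(EXIT_FAILURE, "queue setup failed\n");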

> 
> 
> > + *
> > + *
> > + * The device start step is the last one and consists of setting the event
> > + * queues to start accepting the events and schedules to event ports.
> > + *
> > + * On success, all basic functions exported by the API (event enqueue,
> > + * event dequeue and so on) can be invoked.
> > + *
> > + * @param dev_id
> > + *   Event device identifier
> > + * @return
> > + *   - 0: Success, device started.
> > + *   - <0: Error code of the driver device start function.
> > + */
> > +extern int
> > +rte_event_dev_start(uint8_t dev_id);
> > +
> > +/**
> > + * Stop an event device. The device can be restarted with a call to
> > + * rte_event_dev_start()
> > + *
> > + * @param dev_id
> > + *   Event device identifier.
> > + */
> > +extern void
> > +rte_event_dev_stop(uint8_t dev_id);
> >
> 
> Having this be a void function implies this function cannot fail. Is that
> assumption always correct?

Yes. A subsequent rte_event_dev_start() can return an error if the
implementation really has some critical issue in starting the device.

> 
> 
> > +
> > +/**
> > + * Close an event device. The device cannot be restarted!
> > + *
> > + * @param dev_id
> > + *   Event device identifier
> > + *
> > + * @return
> > + *  - 0 on successfully closing device
> > + *  - <0 on failure to close device
> > + */
> > +extern int
