DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Rao, Nikhil" <nikhil.rao@intel.com>
Cc: hemant.agrawal@nxp.com, dev@dpdk.org, narender.vangati@intel.com,
	abhinandan.gujjar@intel.com, gage.eads@intel.com
Subject: Re: [dpdk-dev] [RFC] eventdev: event tx adapter APIs
Date: Sun, 10 Jun 2018 17:42:57 +0530	[thread overview]
Message-ID: <20180610121256.GA4792@jerin> (raw)
In-Reply-To: <df833c5f-03f3-b035-8fb4-523d1c7bd2d7@intel.com>

-----Original Message-----
> Date: Tue, 5 Jun 2018 14:54:58 +0530
> From: "Rao, Nikhil" <nikhil.rao@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: hemant.agrawal@nxp.com, dev@dpdk.org, narender.vangati@intel.com,
>  abhinandan.gujjar@intel.com, gage.eads@intel.com, nikhil.rao@intel.com
> Subject: Re: [RFC] eventdev: event tx adapter APIs
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
>  Thunderbird/52.8.0
> 
> On 6/4/2018 10:41 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Fri, 1 Jun 2018 23:47:00 +0530
> > > From: "Rao, Nikhil" <nikhil.rao@intel.com>
> > > To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > CC: hemant.agrawal@nxp.com, dev@dpdk.org, narender.vangati@intel.com,
> > >   abhinandan.gujjar@intel.com, gage.eads@intel.com, nikhil.rao@intel.com
> > > Subject: Re: [RFC] eventdev: event tx adapter APIs
> > > User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
> > >   Thunderbird/52.8.0
> > > 
> > > 
> > > Hi Jerin,
> > 
> > 
> > > The workers invoke rte_event_enqueue_burst() to their local port, not to
> > > the extra port as you described. The queue ID specified when enqueuing is
> > > linked to the adapter's port; the adapter reads these events and transmits
> > > the mbufs on the ethernet port and queue specified in those mbufs. The
> > > diagram below illustrates what I just described.
> > > 
> > > +------+
> > > |      |   +----+
> > > |Worker+-->+port+--+
> > > |      |   +----+  |                                         +----+
> > > +------+           |                                     +-->+eth0|
> > >                    |  +---------+            +-------+   |   +----+
> > >                    +--+         |   +----+   |       +---+   +----+
> > >                       |  Queue  +-->+port+-->+Adapter|------>+eth1|
> > >                    +--+         |   +----+   |       +---+   +----+
> > > +------+           |  +---------+            +-------+   |   +----+
> > > |      |   +----+  |                                     +-->+eth2|
> > > |Worker+-->+port+--+                                         +----+
> > > |      |   +----+
> > > +------+
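
For illustration, a minimal worker-side sketch of the final stage implied by
the diagram: the worker marks the destination ethernet port in the mbuf and
forwards the event to the queue linked to the adapter's port.
TX_ADAPTER_QUEUE_ID, the sched type and the retry loop are assumptions, and
the RFC's mbuf private-area TX metadata (carrying the ethernet queue) is not
shown.

#include <rte_eventdev.h>
#include <rte_mbuf.h>
#include <rte_pause.h>

/* Placeholder: the event queue that was linked to the adapter's port. */
#define TX_ADAPTER_QUEUE_ID 0

static void
worker_send_to_adapter(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
		       uint16_t eth_port)
{
	ev->mbuf->port = eth_port;          /* destination ethernet port */
	ev->queue_id = TX_ADAPTER_QUEUE_ID; /* queue the adapter's port is linked to */
	ev->op = RTE_EVENT_OP_FORWARD;
	ev->sched_type = RTE_SCHED_TYPE_ATOMIC;

	/* retry until the event device accepts the event */
	while (rte_event_enqueue_burst(dev_id, port_id, ev, 1) != 1)
		rte_pause();
}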
> > 
> > 
> > Makes sense. One suggestion here: since we have ALL type queues and
> > normal queues, can we move the queue change or sched_type change code
> > out of the application and down into the function pointer abstraction
> > (the adapter knows which queues to enqueue to anyway)? That way we can
> > have the same final-stage code for ALL type queues and normal queues.
> > 
> Yes, I see the queue/sched type change approach followed in
> pipeline_worker_tx.c; a queue ID can be provided in
> rte_event_eth_tx_adapter_conf
> 
> +struct rte_event_eth_tx_adapter_conf {
> +	uint8_t event_port_id;
> +	/**< Event port identifier, the adapter dequeues mbuf events from this
> +	 * port.
> +	 */
> +	uint16_t tx_metadata_off;
> +	/**<  Offset of struct rte_event_eth_tx_adapter_meta in the private
> +	 * area of the mbuf
> +	 */
> +	uint32_t max_nb_tx;
> +	/**< The adapter can return early if it has processed at least
> +	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
> +	 * cause the adapter to process more than max_nb_tx mbufs.
> +	 */
> +};
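
For reference, a minimal sketch of filling the proposed configuration. The
values and the adapter_port_id parameter are illustrative only, and how the
struct is handed to the adapter create call is not shown in this mail.

/* Assumes the rte_event_eth_tx_adapter_conf definition proposed above. */
static struct rte_event_eth_tx_adapter_conf
make_tx_adapter_conf(uint8_t adapter_port_id)
{
	struct rte_event_eth_tx_adapter_conf conf = {
		.event_port_id = adapter_port_id, /* port the adapter dequeues from */
		.tx_metadata_off = 0, /* adapter meta at start of mbuf private area */
		.max_nb_tx = 128,     /* hint: adapter may return after ~128 mbufs */
	};

	return conf;
}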
> 
> </snipped>
> 
> > > The worker core will receive events pointing to mbufs that need to be
> > > transmitted to different
> > > ports/queues, as described above. The port and the queue will be populated
> > > in the mbuf and the
> > > API can be as below
> > > 
> > > uint16_t rte_event_eth_tx_adapter_enqueue(uint8_t instance_id, uint8_t event_port_id, const struct rte_event ev[], uint16_t nb_events);
> > > 
> > > Let me know if that works for you.
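
As an illustrative sketch only, a worker final stage using the signature
proposed above (before the dev_id addition discussed below); it assumes the
destination port/queue metadata is already set in each mbuf.

#include <rte_eventdev.h>

/* Prototype as proposed in this mail; not an existing DPDK declaration. */
uint16_t rte_event_eth_tx_adapter_enqueue(uint8_t instance_id,
					  uint8_t event_port_id,
					  const struct rte_event ev[],
					  uint16_t nb_events);

static void
worker_tx_stage(uint8_t instance_id, uint8_t event_port_id,
		struct rte_event ev[], uint16_t nb_rx)
{
	uint16_t nb_tx = 0;

	/* busy-retry until the adapter accepts all events */
	while (nb_tx < nb_rx)
		nb_tx += rte_event_eth_tx_adapter_enqueue(instance_id,
							  event_port_id,
							  ev + nb_tx,
							  nb_rx - nb_tx);
}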
> > 
> > Yes. That API works for me. I think we can leverage the "struct
> > rte_eventdev" area for adding a new function pointer. Just like the
> > enqueue_new_burst and enqueue_forward_burst variants, we can add one
> > more there, so that we can reuse that hot cache line for all fast-path
> > function pointers. That would translate to adding a "uint8_t dev_id"
> > argument to the above API.

> The dev_id can be derived from the instance_id, does that work?

Do we need to do that in the fast path? IMO, if you can do it in the slow path then it is fine.

> 
> I need some clarification on the configuration API/flow. The
> eventdev_pipeline sample app checks if the DEV_TX_OFFLOAD_MT_LOCKFREE flag
> is set on all ethernet devices and, if so, uses the pipeline_worker_tx path
> as opposed to the "consumer" function.

Yes

> If we were to use the adapter to replace some of the sample code, then it
> seems like RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is a hardware assist
> for the pipeline worker tx mode. The adapter would support 2 modes
> (consumer and worker_tx, borrowing terminology from the sample); worker_tx
> would only be supported if the eventdev supports
> RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT (at least in the first version)

Yes.
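
A sketch of the mode selection above, assuming a caps query modelled on the
Rx adapter's rte_event_eth_rx_adapter_caps_get(); the exact Tx adapter name
and signature are not settled in this mail and are assumptions here.

#include <stdbool.h>
#include <stdint.h>

/* Assumed API, declared here only for the sketch. */
int rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
				      uint32_t *caps);

static bool
tx_adapter_has_internal_port(uint8_t dev_id, uint16_t eth_port_id)
{
	uint32_t caps = 0;

	if (rte_event_eth_tx_adapter_caps_get(dev_id, eth_port_id, &caps))
		return false; /* query failed: assume no HW assist */

	/* workers can enqueue directly to the adapter (worker_tx mode) */
	return (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) != 0;
}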

1) I think the rte_event_eth_tx_adapter_enqueue() function can simply do:

struct rte_eventdev *dev = &rte_eventdevs[dev_id];
return (*dev->eth_tx_adapter_enqueue)(...);

2) You can expose a generic version of "eth_tx_adapter_enqueue" in the Tx
adapter. If a driver does not set the "eth_tx_adapter_enqueue" function
pointer, or the DEV_TX_OFFLOAD_MT_LOCKFREE flag is NOT set on all ethernet
devices, _then_ the common code can assign your generic Tx adapter function
as the eth_tx_adapter_enqueue function pointer.
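
A sketch of that selection; the function pointer type and the
txa_generic_enqueue name are illustrative, not existing DPDK symbols.

#include <stdbool.h>
#include <stdint.h>

struct rte_event;

/* Illustrative fast-path function pointer type for the Tx adapter. */
typedef uint16_t (*txa_enqueue_t)(void *port, struct rte_event ev[],
				  uint16_t nb_events);

/* Generic SW implementation exposed by the Tx adapter (prototype only). */
uint16_t txa_generic_enqueue(void *port, struct rte_event ev[],
			     uint16_t nb_events);

/* Use the driver's eth_tx_adapter_enqueue only if it is set and every
 * ethernet device advertises DEV_TX_OFFLOAD_MT_LOCKFREE; otherwise fall
 * back to the generic version.
 */
static txa_enqueue_t
txa_select_enqueue(txa_enqueue_t driver_enqueue, bool all_eth_mt_lockfree)
{
	if (driver_enqueue == NULL || !all_eth_mt_lockfree)
		return txa_generic_enqueue;

	return driver_enqueue;
}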

3) I think you can focus only on the generic "consumer" case, as you cannot
test the "worker_tx" case. We are planning to add a more optimized raw
"worker_tx" case in the driver (point 2 will allow that by having a
driver-specific "eth_tx_adapter_enqueue" function pointer).

/Jerin

> 
> Thanks,
> Nikhil
> 


Thread overview: 11+ messages
2018-05-25 15:08 Nikhil Rao
2018-05-30  7:26 ` Jerin Jacob
2018-06-01 18:17   ` Rao, Nikhil
2018-06-04  5:11     ` Jerin Jacob
2018-06-05  9:24       ` Rao, Nikhil
2018-06-10 12:05         ` Jerin Jacob
2018-06-10 12:12         ` Jerin Jacob [this message]
2018-06-12 21:32 ` [dpdk-dev] [RFC v2] " Nikhil Rao
2018-06-14 12:09   ` Rao, Nikhil
2018-06-17 11:09   ` Jerin Jacob
2018-06-18 12:10     ` Rao, Nikhil
