From: "Rao, Nikhil" <nikhil.rao@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: gage.eads@intel.com, dev@dpdk.org, thomas@monjalon.net,
	bruce.richardson@intel.com, harry.van.haaren@intel.com,
	hemant.agrawal@nxp.com, nipun.gupta@nxp.com,
	narender.vangati@intel.com,
	Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
Date: Mon, 10 Jul 2017 11:44:10 +0530
Message-ID: <3d2d78cc-9572-bf95-6d25-9b350da62827@intel.com>
In-Reply-To: <20170707155707.GA6245@jerin>

Hi Jerin,

Thanks for your feedback; further comments below.

On 7/7/2017 9:27 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Fri, 7 Jul 2017 20:33:19 +0530
>> From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> /* adapter has inbuilt port, no need to create producer port */
>>>> #define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT  (1ULL << 0)
>>>> /* adapter does not need service function */
>>>> #define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)
>>
>> I just provided a name to share the view. You can choose a better name.
>>
OK.

>>>> int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id, uint8_t
>>> eth_port_id);
>>>

I also think that the application should be able to call create() with
more than one port. This would allow a single service to poll multiple
NICs with WRR priority.

The side effect is that the queue add/del APIs would then need to
specify the port_id as well (see the sketch below).
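
To make this concrete, a hypothetical multi-port variant could look
like the following sketch; the port-array parameter and the exact names
are illustrative, not a settled proposal:

/* Hypothetical multi-port create: the adapter polls every port in
 * eth_port_ids[] with WRR priority.
 */
int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
			const uint8_t *eth_port_ids, uint16_t nb_ports);

/* queue add/del would then need to identify the port as well */
int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint8_t eth_port_id,
			int32_t rx_queue_id,
			const struct rte_eth_rx_event_adapter_queue_config *config);
int rte_event_eth_rx_adapter_queue_del(uint8_t id, uint8_t eth_port_id,
			int32_t rx_queue_id);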

>>>> int rte_event_eth_rx_adapter_get_info(uint8_t id, struct
>>> rte_event_eth_rx_adap_info *info);
>>>> int rte_event_eth_rx_adapter_configure(uint8_t id, struct
>>> rte_event_eth_rx_adap_config *cfg);
>>>> int rte_event_eth_rx_adapter_queue_add(uint8_t id, int32_t rx_queue_id,
>>> const struct rte_eth_rx_event_adapter_queue_config *config);
>>>> int rte_event_eth_rx_adapter_queue_del(uint8_t id, int32_t rx_queue_id);

>>>> int rte_event_eth_rx_adapter_run();

The adapter's run function is not part of the public interface, agree?
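
For reference, a minimal sketch of what such an internal (non-public)
run function might do, assuming a hypothetical per-adapter state
struct; all names here are illustrative:

#define BURST_SIZE 32

/* Poll an added ethdev Rx queue and inject the packets into the event
 * device. A real implementation would iterate over all added queues,
 * e.g. in WRR order, and handle enqueue backpressure.
 */
static void
rx_adapter_run(struct rx_adapter *adapter)
{
	struct rte_mbuf *mbufs[BURST_SIZE];
	struct rte_event events[BURST_SIZE];
	uint16_t i, n;

	n = rte_eth_rx_burst(adapter->eth_port_id, adapter->rx_queue_id,
			     mbufs, BURST_SIZE);
	for (i = 0; i < n; i++) {
		/* event metadata (queue id, sched type, priority) comes
		 * from the queue add configuration
		 */
		events[i].event = adapter->event_md;
		events[i].mbuf = mbufs[i];
	}
	rte_event_enqueue_burst(adapter->eventdev_id,
				adapter->event_port_id, events, n);
}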
>>>> int rte_event_eth_rx_adapter_free(uint8_t id);
>>>>
>>>
>>> If I understood your idea, the function pointer struct would look something
>>> like this.
>>>
>>> struct rte_eventdev_rx_adapter_ops {
>>> 	rx_adapter_create_t	create;
>>> 	rx_adapter_get_info_t	info;
>>> 	rx_adapter_configure_t	configure;
>>> 	rx_adapter_queue_add_t	queue_add;
>>> 	rx_adapter_queue_del_t	queue_del;
>>> 	rx_adapter_queue_free_t	queue_free;
>>> 	rx_adapter_free_t	free;
>>> };
>>
>> Yes. But I think the adapter create and free go in struct rte_eventdev_ops.
>> See below.
>>
>>
>>>
>>> struct rte_eventdev {
>>>         ..
>>>         const struct rte_eventdev_rx_adapter_ops *rx_adapter_ops;
>>>         ..
>>> };
>>
>> An eventdev instance can have N adapters, not just one. So this will be
>> a pointer to pointer, indexed through adapter_id.
>>
>> In the SW PMD case, the eventdev may have only one adapter.
>> In HW PMD cases, there will be more than one. For example:
>> - octeontx eventdev dev_id + any external PCI n/w card such as i40e or
>> nicvf: an adapter with adapter ops similar to the SW PMD's
>> - octeontx eventdev dev_id + octeontx ethdev dev_id: another adapter
>> that does not need service cores to inject events into the octeontx
>> eventdev
>>
>> - We will make your generic Rx adapter common code so that both the SW PMD
>> and the octeontx PMD in service core mode can use the functions and
>> register the ops.

OK.
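
A rough sketch of that per-adapter lookup, following the ".."
placeholder style of the earlier snippet (the member name is
illustrative):

struct rte_eventdev {
        ..
        /* one ops table per adapter, indexed by adapter id; a SW PMD
         * may populate a single entry, a HW PMD several
         */
        const struct rte_eventdev_rx_adapter_ops **rx_adapter_ops;
        ..
};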
>>
>>
>> I was thinking like this
>>
>> int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id, uint8_t eth_port_id)
>> {
>> 	eventdev_ops = ... /* access through dev_id */
>>
>> 	/* common code to get ops memory from adapter id */
>> 	struct rte_eventdev_rx_adapter_ops* ops = rte_event_pmd_adapter_allocate(id);
>>
>> 	/* Fill in adapter ops from driver */
>> 	eventdev_ops->create_adapter(ops, dev_id, eth_port_id);

As an implementation detail, the function pointer setup needs to account
for DPDK primary/secondary processes, i.e., the function addresses could
be different in the two processes (see the sketch after the quoted code).

>>
>> }
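
One way to handle that (a sketch, not the proposed code; the
rx_adapter_ops member is the hypothetical one from this discussion) is
to keep only IDs in shared memory and resolve the ops table per
process:

/* Shared adapter data holds no function pointers, since code is
 * mapped at different addresses in primary and secondary processes.
 */
struct rx_adapter_data {	/* lives in shared memory */
	uint8_t dev_id;		/* eventdev id, valid in both processes */
	uint8_t id;		/* adapter id */
};

static inline const struct rte_eventdev_rx_adapter_ops *
rx_adapter_ops_get(const struct rx_adapter_data *data)
{
	/* rte_eventdevs[] is per-process, so the resolved addresses
	 * are correct for the calling process
	 */
	return rte_eventdevs[data->dev_id].rx_adapter_ops[data->id];
}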
>>
>> int rte_event_eth_rx_adapter_get_info(uint8_t id, struct rte_event_eth_rx_adap_info *info)
>> {

The argument list is missing the dev_id, as do the
configure/queue_add/queue_del functions; a corrected sketch follows the
quoted code.

>> 	eventdev_ops = ... /* access through dev_id */
>>
>> 	struct rte_eventdev_rx_adapter_ops* ops = eventdev_ops->rx_adapter_ops[id];
>>
>> 	/* adapter specific info op */
>> 	ops->info(ops, ...);
>>
>> }
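
For illustration only (struct names taken from the sketches above, with
the dev_id added):

int rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t id,
		struct rte_event_eth_rx_adap_info *info);
int rte_event_eth_rx_adapter_configure(uint8_t dev_id, uint8_t id,
		struct rte_event_eth_rx_adap_config *cfg);
int rte_event_eth_rx_adapter_queue_add(uint8_t dev_id, uint8_t id,
		int32_t rx_queue_id,
		const struct rte_eth_rx_event_adapter_queue_config *config);
int rte_event_eth_rx_adapter_queue_del(uint8_t dev_id, uint8_t id,
		int32_t rx_queue_id);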
> 
> IMO, the mapping with your current proposal may go like this.
> 
> - rte_event_eth_rx_adapter_create() // will just fill in the adapter ops from the
>    driver; probably your existing rte_event_eth_rx_adapter_init() can go here
> - the existing rte_event_eth_rx_adapter_create() will be more like the
>    proposed rte_event_eth_rx_adapter_configure() on the given adapter
> - the existing rte_event_eth_rx_adapter_queue_add() maps 1:1, but you can
>    remove eth_dev_id as it is provided in the proposed rte_event_eth_rx_adapter_create()

See above for the case where the application could call create() with
two different ports.

> - the same applies to rte_event_eth_rx_adapter_queue_del()
> - the existing rte_event_eth_rx_adapter_stats_get() can be changed to
>    an "xstats" scheme (more like the eventdev xstats scheme) so that the
>    application can get stats based on the adapter capability
> 
> 
> The flow shall be:
> 1) The application calls rte_event_eth_rx_adapter_create(id, eventdev_id,
> ethdev_id) to create the adapter. (The driver fills in the adapter ops based
> on the eventdev_id + ethdev_id capabilities.)
> 2) The application calls rte_event_eth_rx_adapter_info_get() to get the
> capabilities and other information of the adapter.
> 3) The application calls rte_event_eth_rx_adapter_configure() to configure
> the adapter based on the application's needs and the adapter's capabilities.
> 4) The application calls rte_event_eth_rx_adapter_queue_add() to link ethdev
> queues to eventdev queues. On this call, the driver creates the service
> functions or programs the HW registers, based on the adapter capability, to
> take the events from the ethdev and inject them into the eventdev.
>

The flow looks good to me.
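
As a usage illustration of that flow (error handling omitted; the
function and struct names are placeholders from this discussion, not a
final API, and the id variables are assumed initialized):

struct rte_event_eth_rx_adap_info info;
struct rte_event_eth_rx_adap_config cfg;
struct rte_eth_rx_event_adapter_queue_config queue_config;

/* 1) create: the driver fills the adapter ops from the
 *    eventdev + ethdev capabilities
 */
rte_event_eth_rx_adapter_create(adapter_id, eventdev_id, ethdev_id);

/* 2) query the adapter capabilities */
rte_event_eth_rx_adapter_info_get(adapter_id, &info);

/* 3) configure according to application need and capability */
rte_event_eth_rx_adapter_configure(adapter_id, &cfg);

/* 4) link an ethdev Rx queue to an eventdev queue; the driver sets up
 *    a service function or programs HW based on the capability
 */
rte_event_eth_rx_adapter_queue_add(adapter_id, rx_queue_id, &queue_config);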

>>
>>
>>>
>>> But from previous emails (see struct rte_event_dev_producer_conf below), it
>>> appears that the rx queue id and event information would be needed to create
>>> the adapter that enqueues from the ethdev queue id to the event PMD; however,
>>> that information is available only at queue add time - thoughts?
>>
>> Just the eventdev_id and ethdev_id are enough to create the adapter.

OK.
