From: "Rao, Nikhil"
To: Jerin Jacob
Cc: gage.eads@intel.com, dev@dpdk.org, thomas@monjalon.net,
 bruce.richardson@intel.com, harry.van.haaren@intel.com,
 hemant.agrawal@nxp.com, nipun.gupta@nxp.com, narender.vangati@intel.com,
 Abhinandan Gujjar
References: <1499377952-5306-1-git-send-email-nikhil.rao@intel.com>
 <20170706141829.GA5260@jerin> <02aef899-da84-9281-e4a4-2871237ea20e@intel.com>
 <20170707150317.GA2007@jerin> <20170707155707.GA6245@jerin>
 <3d2d78cc-9572-bf95-6d25-9b350da62827@intel.com> <20170710104126.GA13609@jerin>
 <4197b5f1-9a15-5892-12d2-6bd142bc4d85@intel.com> <20170713184445.GA3659@jerin>
 <123ed8d6-4fd9-8bee-d86e-d270a092169e@intel.com> <20170729151252.GA25166@jerin>
Message-ID: <7b9ca757-f428-3675-b997-794ec6e96f2a@intel.com>
Date: Tue, 1 Aug 2017 14:10:26 +0530
In-Reply-To: <20170729151252.GA25166@jerin>
Subject: Re: [dpdk-dev] [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues

On 7/29/2017 8:42 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Thu, 27 Jul 2017 16:28:29 +0530
>> From: "Rao, Nikhil"
>> To: Jerin Jacob
>> CC: gage.eads@intel.com, dev@dpdk.org, thomas@monjalon.net,
>>  bruce.richardson@intel.com, harry.van.haaren@intel.com,
>>  hemant.agrawal@nxp.com, nipun.gupta@nxp.com, narender.vangati@intel.com,
>>  Abhinandan Gujjar , nikhil.rao@intel.com
>> Subject: Re: [PATCH 1/2] eventdev: add event adapter for ethernet Rx queues
>>
>> In the case of a SW thread we would like to use the servicing weight
>> specified in the queue to do WRR across <eth port, queue id>s, in keeping with
>
> OK, then let's work together to address this in a transparent manner so
> that it works for both HW and SW.
>
>> the adapter per <event device, event port> model. One way to do this is to use
>> the same cfg.service_name in the rte_event_eth_rx_adapter_configure() call.
>>
>> However this creates a few difficulties/inconsistencies:
>
> I agree. If we are thinking about WRR across <eth port, queue id>s then the
> above proposal's implementation creates inconsistencies. On the other side, it
> creates challenges for a HW implementation to have a unified adapter API that
> works for both HW and SW.
>
>> 1) Service has the notion of a socket id. Multiple event dev IDs can be
>> included in the same service, each event dev has a socket ID -> this seems
>> to be an inconsistency that shouldn’t be allowed by design.
>>
>> 2) Say, the Rx event adapter doesn’t drop packets (could be configurable),
>> i.e., if events cannot be enqueued into the event device, these remain in a
>> buffer; when the buffer fills up, packets aren’t dequeued from the eth
>> device.
>>
>> In the simplest case the Rx event adapter service has a single <event
>> device, event port> across multiple eth ports; it dequeues from the wrr[]
>> and buffers events, then bulk enqueues BATCH_SIZE events into the <event
>> device, event port>.
>>
>> With adapters having different <event device, event port> the code can be
>> optimized so that adapters that have a common <event device, event port> can
>> be made to refer to a common enqueue buffer { event dev, event port, buffer }
>> structure, but this adds more book keeping in the code.
>>
>> 3) Every adapter can be configured with max_nb_rx (a max nb of packets that
>> it can process in any invocation) – but max_nb_rx seems like a service
>> level parameter instead of it being a summation across adapters.
>>
>> 1 & 3 could be solved by restricting the adapters to the same socket ID (as
>> in the first rte_event_eth_rx_adapter_configure() call), and perhaps using
>> the max value of max_nb_rx or using the same value of max_nb_rx across
>> adapters. #2 is doable but has a bit of code complexity to handle the
>> generic case.
>>
>> Before we go there, I wanted to check if there is an alternative possible
>> that would remove the difficulties above. Essentially allow multiple eth
>> ports within an adapter but avoid the problem of the inconsistent
>> combinations when using multiple eth ports with a single eventdev.
>>
>> Instead of
>> ==
>> rte_event_eth_rx_adapter_create()
>> rte_event_eth_rx_adapter_get_info();
>> rte_event_eth_rx_adapter_configure();
>> rte_event_eth_rx_adapter_queue_add();
>> ==
>>
>> How about ?
>> ==
>>
>> rte_event_eth_rx_adapter_get_info(uint8_t dev_id, uint8_t eth_port_id,
>>                                   struct rte_event_eth_rx_adap_info *info);
>>
>> struct rte_event_eth_rx_adap_info {
>>         uint32_t cap;
>>
>> /* adapter has inbuilt port, no need to create producer port */
>> #define RTE_EVENT_ETHDEV_CAP_INBUILT_PORT    (1ULL << 0)
>> /* adapter does not need service function */
>> #define RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC (1ULL << 1)
>>
>> };
>>
>> rte_event_eth_rx_adapter_conf cfg;
>> cfg.event_port = event_port;
>> cfg.service_name = "rx_adapter_service";
>
> Does the application need to specify the service name? IMO, it is better that
> a component (rx_adapter) defines its name and format and exposes it in
> rte_event_eth_rx_adapter.h
>

I have had the application specify the name so that it can call

struct rte_service_spec *rte_service_get_by_name(const char *name);

followed by

int32_t rte_service_enable_on_lcore(struct rte_service_spec *service, uint32_t lcore);

given that the application knows the socket id of the event device
associated with the adapter.
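For illustration, a minimal sketch of the application-side sequence I have in
mind. Only the two service API prototypes quoted above are assumed; the helper
name and error handling are made up:

#include <errno.h>
#include <stdint.h>
#include <rte_service.h>

/* Hypothetical application helper: look up the adapter service created
 * under cfg.service_name and allow it to run on the given lcore, which
 * the application picks on the same socket as the event device. */
static int
app_map_rx_adapter_service(const char *service_name, uint32_t lcore)
{
        struct rte_service_spec *service;

        service = rte_service_get_by_name(service_name);
        if (service == NULL)
                return -ENOENT;

        return rte_service_enable_on_lcore(service, lcore);
}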
>>
>> // all ports in eth_port_id[] have cap =
>> // !RTE_EVENT_ETHDEV_CAP_INBUILT_PORT
>> // && !RTE_EVENT_ETHDEV_CAP_NO_SERVICE_FUNC
>> rte_event_eth_rx_adapter_create(dev_id, eth_port_id[], N, id, &cfg);
>
> The downside might be:
> - the application has a different flow based on the capability.
> Listing down a few capabilities/limitations below.
>
>> ===
>> int rte_event_eth_rx_adapter_queue_add() would need a port id in the N>1
>> port case, which can be ignored if the adapter doesn’t need it (N=1).
>>
>> thanks for reading the long email, thoughts ?
>
> I have another thought to solve the above mentioned downside.
>
> - The theme is based on your original Rx adapter proposal but with eventdev
> PMD ops (not adapter ops), i.e. reuse as much of your existing Rx adapter
> implementation as common code and add hooks for HW based adapters. For
> example, before you add to "rx_poll" in eth_poll_wrr_calc(), check whether an
> eventdev PMD op is available for adding it in HW. If yes, don't add it to
> "rx_poll".
>
> adapter API
> ------------
> int rte_event_eth_rx_adapter_create(id, rte_event_eth_rx_adapter_conf *conf)
> int rte_event_eth_rx_adapter_queue_add(uint8_t id, uint8_t eth_dev_id,
>	int32_t rx_queue_id, rte_event_eth_rx_adapter_queue_conf *conf);
>
> eventdev PMD op API (not adapter PMD ops, as discussed earlier)
> -------------------
>
> 1) typedef uint64_t (*eventdev_rx_adap_capa)(struct rte_eventdev *dev,
>	uint8_t ethdev_id)
>
> Return the adapter capability of a given eventdev when it needs to be
> connected to a specific ethdev_id.
>

Doesn't the capability of an <eventdev, ethdev> combination also need to be
made available to the application as an adapter API? e.g. so the application
can call rte_event_eth_rx_adapter_queue_add() with rx_queue_id = -1 if
RX_ADAPTER_CAP_ADD_QUEUE is not set.

In the same manner, if I understand RX_ADAPTER_CAP_SET_FLOW_ID correctly, the
application would have to provide a flow ID in
rte_event_eth_rx_adapter_queue_add() if RX_ADAPTER_CAP_SET_FLOW_ID is not set.

Also, if a given <ethdev, eventdev> are connected in HW, then is the servicing
weight of a queue applicable?

> Possible capability values based on my understanding of the existing SW and
> Cavium HW PMDs. NXP folks can add new ones.
>
> - RX_ADAPTER_CAP_INBUILT_PORT - /* adapter has inbuilt port, no need to
>   create a producer port in common code */
> - RX_ADAPTER_CAP_SET_FLOW_ID - /* adapter capable of setting
>   RTE_ETH_RX_EVENT_ADAPTER_QUEUE_FLOW_ID_VALID */
> - RX_ADAPTER_CAP_ADD_QUEUE - /* adapter capable of adding any specific
>   ethdev rx queue to any eventdev queue. Some eventdev PMDs have a limitation
>   that once an < ethdev_id, queue_id> is connected to a specific eventdev
>   queue, all the queue_ids under the same ethdev_id need to be connected to
>   the same eventdev queue, i.e. they work only in the
>   rte_event_eth_rx_adapter_queue_conf.rx_queue_id == -1 mode. */
>
> 2) typedef int (*eventdev_rx_adap_add)(struct rte_eventdev *dev,
>	uint8_t ethdev_id, int queue_id,
>	rte_event_eth_rx_adapter_queue_conf *conf);
> -- if implemented by the eventdev PMD and it returns zero, then the COMMON
>    code does not need to poll
>
> 3) typedef int (*eventdev_rx_adap_del)(struct rte_eventdev *dev,
>	uint8_t ethdev_id, int queue_id)
> -- remove a previously added < ethdev_id, queue_id>
>
> *** Haven't spent a lot of time on API/macro names. Please use a better
> naming convention.
>
> Other notes based on your existing implementation + the eventdev ops scheme:
>
> 1) rte_event_eth_rx_adapter_create() registers the service function by
> default. It should be delayed to when the common adapter code finds a device
> with the !RX_ADAPTER_CAP_INBUILT_PORT cap in
> rte_event_eth_rx_adapter_queue_add().
>
> 2) Do we need rx_adapter start/stop functions?

Yes.

> 3) If it happens to be the case where rte_event_eth_rx_adapter_queue_add()
> uses only RX_ADAPTER_CAP_INBUILT_PORT, then the common code should not
> create any service.

Yes.
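To confirm my understanding of notes 1) and 3), the common code in
rte_event_eth_rx_adapter_queue_add() would then look roughly like the sketch
below. The rxa_* helpers and dev_ops member names are made up for illustration;
only the PMD ops and capability flags proposed above are assumed:

int
rte_event_eth_rx_adapter_queue_add(uint8_t id, uint8_t eth_dev_id,
                int32_t rx_queue_id,
                struct rte_event_eth_rx_adapter_queue_conf *conf)
{
        struct rx_adapter *rxa = rxa_id_to_adapter(id);  /* hypothetical */
        struct rte_eventdev *dev = rxa->eventdev;
        uint64_t cap = 0;

        /* hypothetical dev_ops hook for the proposed eventdev_rx_adap_capa op */
        if (dev->dev_ops->rx_adap_capa != NULL)
                cap = dev->dev_ops->rx_adap_capa(dev, eth_dev_id);

        if (cap & RX_ADAPTER_CAP_INBUILT_PORT)
                /* HW path: hand the <ethdev, queue> to the eventdev PMD,
                 * nothing is added to the SW rx_poll[]/wrr[] tables */
                return dev->dev_ops->rx_adap_add(dev, eth_dev_id,
                                rx_queue_id, conf);

        /* SW path: register the adapter service lazily, on the first
         * polled queue only (your note 1), then add the queue to the
         * WRR poll table used by the service function */
        if (!rxa->service_registered)
                rxa_register_service(rxa);              /* hypothetical */

        return rxa_poll_queue_add(rxa, eth_dev_id, rx_queue_id, conf);
}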
> 4) If the adapter uses one port with a service core and another one with a HW
> adapter, rte_event_eth_rx_adapter_stats.rx_packets will not be updated
> correctly. We need eventdev PMD ops to get those stats. If we agree on the
> overall PMD ops + adapter API partitioning, then we can additionally refine
> the eventdev PMD for stats etc., or an xstats based scheme etc.

Wouldn't rx_packets be the sum of the service core thread packet count and the
count provided by the eventdev PMD ops, obtained from a call to

rte_event_eth_rx_adapter_stats_get(uint8_t id,
	struct rte_event_eth_rx_adapter_stats *stats);

> 5) Specifying rte_event_eth_rx_adapter_conf.rx_event_port_id on
> rte_event_eth_rx_adapter_create() would waste one HW eventdev port if it
> happens to use RX_ADAPTER_CAP_INBUILT_PORT on
> rte_event_eth_rx_adapter_queue_add(). Unlike SW eventdev ports, HW eventdev
> ports are costly, so I think we need another eventdev PMD op to create
> service/producer ports, or any other scheme that creates
> rte_event_eth_rx_adapter_conf.rx_event_port_id on demand in the common code.
>

One solution is:

struct rte_event_eth_rx_adapter_conf {
	uint8_t dev_id;
	int (*conf_cb)(uint8_t id, uint8_t port_id, uint32_t flags,
			struct rte_event_eth_rx_adapter_conf *conf);
	unsigned int max_nb_rx;
	int event_port_id;
	char service_name[];
};

where dev_id and conf_cb have to be specified in the create call, but
event_port_id and service_name will be filled in when conf_cb() is invoked, if
required. flags specifies which parameters of conf need to be filled in.

The conf parameter can be separated into a different struct:

struct rte_event_eth_rx_adapter_ext_conf {
	unsigned int max_nb_rx;
	int event_port_id;
	char service_name[];
};

Nikhil
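PS: to make the conf_cb idea concrete, a rough sketch of an application
callback using the rte_event_eth_rx_adapter_ext_conf split above. The flag
names, the fixed-size service_name, and the function name are all made up for
illustration; only the members proposed above are assumed:

#include <stdint.h>
#include <stdio.h>

/* hypothetical flags passed by the common adapter code in 'flags' */
#define RX_ADAPTER_CONF_F_EVENT_PORT   (1U << 0)
#define RX_ADAPTER_CONF_F_SERVICE_NAME (1U << 1)

struct rte_event_eth_rx_adapter_ext_conf {
        unsigned int max_nb_rx;
        int event_port_id;
        char service_name[32];  /* sized here only so the sketch can fill it */
};

static uint8_t app_rx_event_port; /* event port the app set aside for the adapter */

/* Invoked by the common adapter code only when it finds it needs a SW
 * service, i.e. on the first queue_add() for an eventdev without
 * RX_ADAPTER_CAP_INBUILT_PORT, so no HW event port is wasted otherwise. */
static int
app_rx_adapter_conf_cb(uint8_t id, uint8_t port_id, uint32_t flags,
                       struct rte_event_eth_rx_adapter_ext_conf *conf)
{
        (void)port_id;

        if (flags & RX_ADAPTER_CONF_F_EVENT_PORT)
                conf->event_port_id = app_rx_event_port;

        if (flags & RX_ADAPTER_CONF_F_SERVICE_NAME)
                snprintf(conf->service_name, sizeof(conf->service_name),
                         "rx_adapter_%u", id);

        conf->max_nb_rx = 128;
        return 0;
}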