From: Thomas Monjalon
To: Jerin Jacob
Cc: dev@dpdk.org, bruce.richardson@intel.com, harry.van.haaren@intel.com, hemant.agrawal@nxp.com, gage.eads@intel.com
Date: Wed, 23 Nov 2016 19:39:09 +0100
Message-ID: <3691745.y1f1NvKTEv@xps13>
In-Reply-To: <1479447902-3700-2-git-send-email-jerin.jacob@caviumnetworks.com>
References: <1479447902-3700-1-git-send-email-jerin.jacob@caviumnetworks.com> <1479447902-3700-2-git-send-email-jerin.jacob@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH 1/4] eventdev: introduce event driven programming model

Hi Jerin,

Thanks for bringing a big new piece to DPDK.
I made some comments below.

2016-11-18 11:14, Jerin Jacob:
> +Eventdev API - EXPERIMENTAL
> +M: Jerin Jacob
> +F: lib/librte_eventdev/

OK to mark it experimental.
What is the plan to remove the experimental status?

> + * RTE event device drivers do not use interrupts for enqueue or dequeue
> + * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
> + * functions to applications.

To the question "what makes DPDK different" it could be answered
that DPDK event drivers implement polling functions :)

> +#include
> +
> +#include
> +#include
> +#include

Is it possible to remove some of these includes from the API?

> +
> +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> +/**< Skeleton event device PMD name */

I do not understand this #define, and it is not properly prefixed.
> +struct rte_event_dev_info {
> + const char *driver_name; /**< Event driver name */
> + struct rte_pci_device *pci_dev; /**< PCI information */

There is some work in progress to remove PCI information from ethdev.
Please do not add any PCI-related structure in eventdev.
The generic structure is rte_device.

> +struct rte_event_dev_config {
> + uint32_t dequeue_wait_ns;
> + /**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.

Please explain exactly when the wait occurs and why.

> + * This value should be in the range of *min_dequeue_wait_ns* and
> + * *max_dequeue_wait_ns* which previously provided in
> + * rte_event_dev_info_get()
> + * \see RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT

I think the @see syntax would be more consistent than \see.

> + uint8_t nb_event_port_dequeue_depth;
> + /**< Number of dequeue queue depth for any event port on this device.

I think it deserves more explanation.

> + uint32_t event_dev_cfg;
> + /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/

How does this field differ from the others in the struct?
Should it be named flags?

> + uint32_t event_queue_cfg; /**< Queue config flags(EVENT_QUEUE_CFG_) */

Same comment about the naming of this field, in the event_queue config struct.

> +/** Event port configuration structure */
> +struct rte_event_port_conf {
> + int32_t new_event_threshold;
> + /**< A backpressure threshold for new event enqueues on this port.
> + * Use for *closed system* event dev where event capacity is limited,
> + * and cannot exceed the capacity of the event dev.
> + * Configuring ports with different thresholds can make higher priority
> + * traffic less likely to be backpressured.
> + * For example, a port used to inject NIC Rx packets into the event dev
> + * can have a lower threshold so as not to overwhelm the device,
> + * while ports used for worker pools can have a higher threshold.
> + * This value cannot exceed the *nb_events_limit*
> + * which previously supplied to rte_event_dev_configure()
> + */
> + uint8_t dequeue_depth;
> + /**< Configure number of bulk dequeues for this event port.
> + * This value cannot exceed the *nb_event_port_dequeue_depth*
> + * which previously supplied to rte_event_dev_configure()
> + */
> + uint8_t enqueue_depth;
> + /**< Configure number of bulk enqueues for this event port.
> + * This value cannot exceed the *nb_event_port_enqueue_depth*
> + * which previously supplied to rte_event_dev_configure()
> + */
> +};

The depth configuration is not clear to me.

> +/* Event types to classify the event source */

Why is this classification needed?

> +#define RTE_EVENT_TYPE_ETHDEV 0x0
> +/**< The event generated from ethdev subsystem */
> +#define RTE_EVENT_TYPE_CRYPTODEV 0x1
> +/**< The event generated from crypodev subsystem */
> +#define RTE_EVENT_TYPE_TIMERDEV 0x2
> +/**< The event generated from timerdev subsystem */
> +#define RTE_EVENT_TYPE_CORE 0x3
> +/**< The event generated from core.

What is core?

> +/* Event enqueue operations */

I feel a longer explanation is needed here to describe
what an operation is and where this data is useful.

> +#define RTE_EVENT_OP_NEW 0
> +/**< New event without previous context */
> +#define RTE_EVENT_OP_FORWARD 1
> +/**< Re-enqueue previously dequeued event */
> +#define RTE_EVENT_OP_RELEASE 2

There is no comment for the release operation.

> +/**
> + * Release the flow context associated with the schedule type.
> + * [...]
> + */

There is no function declaration below this comment.

> +/**
> + * The generic *rte_event* structure to hold the event attributes
> + * for dequeue and enqueue operation
> + */
> +struct rte_event {
> + /** WORD0 */
> + RTE_STD_C11
> + union {
> + uint64_t event;
[...]
> + };
> + /** WORD1 */
> + RTE_STD_C11
> + union {
> + uintptr_t event_ptr;

I wonder if it can be a problem to have the size of this field
not constant across machines.
> + /**< Opaque event pointer */
> + struct rte_mbuf *mbuf;
> + /**< mbuf pointer if dequeued event is associated with mbuf */

How do we know that an event is associated with an mbuf?
Does it mean that such events are always converted into mbufs
even if the application does not need it?

> +struct rte_eventdev_driver;
> +struct rte_eventdev_ops;

I think it is better to split the API and the driver interface in two files.
(we should do this split in ethdev)

> +/**
> + * Enqueue the event object supplied in the *rte_event* structure on an
> + * event device designated by its *dev_id* through the event port specified by
> + * *port_id*. The event object specifies the event queue on which this
> + * event will be enqueued.
> + *
> + * @param dev_id
> + * Event device identifier.
> + * @param port_id
> + * The identifier of the event port.
> + * @param ev
> + * Pointer to struct rte_event
> + *
> + * @return
> + * - 0 on success
> + * - <0 on failure. Failure can occur if the event port's output queue is
> + * backpressured, for instance.
> + */
> +static inline int
> +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)

Is it really needed to have a non-burst variant of enqueue/dequeue?

> +/**
> + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> + *
> + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> + * application can use this function to convert wait value in nanoseconds to
> + * implementations specific wait value supplied in rte_event_dequeue()

Why is it implementation-specific?
Why is this conversion not internal to the driver?

End of review for this patch ;)