From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: jerin.jacob@caviumnetworks.com
Cc: bruce.richardson@intel.com, dev@dpdk.org,
	Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Date: Tue, 11 Sep 2018 10:02:09 +0200
Message-Id: <20180911080216.3017-4-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180911080216.3017-1-mattias.ronnblom@ericsson.com>
References: <20180911080216.3017-1-mattias.ronnblom@ericsson.com>
Subject: [dpdk-dev] [PATCH v3 03/10] event/dsw: add DSW port configuration

Allow port setup and release in the DSW event device.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 drivers/event/dsw/dsw_evdev.c | 60 +++++++++++++++++++++++++++++++++++
 drivers/event/dsw/dsw_evdev.h | 28 ++++++++++++++++
 2 files changed, 88 insertions(+)

diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 1500d2426..91b1a2449 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -9,6 +9,62 @@
 
 #define EVENTDEV_NAME_DSW_PMD event_dsw
 
+static int
+dsw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+	       const struct rte_event_port_conf *conf)
+{
+	struct dsw_evdev *dsw = dsw_pmd_priv(dev);
+	struct dsw_port *port;
+	struct rte_event_ring *in_ring;
+	char ring_name[RTE_RING_NAMESIZE];
+
+	port = &dsw->ports[port_id];
+
+	*port = (struct dsw_port) {
+		.id = port_id,
+		.dsw = dsw,
+		.dequeue_depth = conf->dequeue_depth,
+		.enqueue_depth = conf->enqueue_depth,
+		.new_event_threshold = conf->new_event_threshold
+	};
+
+	snprintf(ring_name, sizeof(ring_name), "dsw%d_p%u", dev->data->dev_id,
+		 port_id);
+
+	in_ring = rte_event_ring_create(ring_name, DSW_IN_RING_SIZE,
+					dev->data->socket_id,
+					RING_F_SC_DEQ|RING_F_EXACT_SZ);
+
+	if (in_ring == NULL)
+		return -ENOMEM;
+
+	port->in_ring = in_ring;
+
+	dev->data->ports[port_id] = port;
+
+	return 0;
+}
+
+static void
+dsw_port_def_conf(struct rte_eventdev *dev __rte_unused,
+		  uint8_t port_id __rte_unused,
+		  struct rte_event_port_conf *port_conf)
+{
+	*port_conf = (struct rte_event_port_conf) {
+		.new_event_threshold = 1024,
+		.dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH / 4,
+		.enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH / 4
+	};
+}
+
+static void
+dsw_port_release(void *p)
+{
+	struct dsw_port *port = p;
+
+	rte_event_ring_free(port->in_ring);
+}
+
 static int
 dsw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
 		const struct rte_event_queue_conf *conf)
@@ -81,12 +137,16 @@ dsw_configure(const struct rte_eventdev *dev)
 	struct dsw_evdev *dsw = dsw_pmd_priv(dev);
 	const struct rte_event_dev_config *conf = &dev->data->dev_conf;
 
+	dsw->num_ports = conf->nb_event_ports;
 	dsw->num_queues = conf->nb_event_queues;
 
 	return 0;
 }
 
 static struct rte_eventdev_ops dsw_evdev_ops = {
+	.port_setup = dsw_port_setup,
+	.port_def_conf = dsw_port_def_conf,
+	.port_release = dsw_port_release,
 	.queue_setup = dsw_queue_setup,
 	.queue_def_conf = dsw_queue_def_conf,
 	.queue_release = dsw_queue_release,
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index 5eda8d114..2a4f10421 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -5,6 +5,7 @@
 #ifndef _DSW_EVDEV_H_
 #define _DSW_EVDEV_H_
 
+#include <rte_event_ring.h>
 #include <rte_eventdev.h>
 
 #define DSW_PMD_NAME RTE_STR(event_dsw)
@@ -23,6 +24,31 @@
 #define DSW_MAX_FLOWS (1<<(DSW_MAX_FLOWS_BITS))
 #define DSW_MAX_FLOWS_MASK (DSW_MAX_FLOWS-1)
 
+/* The rings are dimensioned so that all in-flight events can reside
+ * on any one of the port rings, to avoid the trouble of having to
+ * care about the case where there's no room on the destination port's
+ * input ring.
+ */
+#define DSW_IN_RING_SIZE (DSW_MAX_EVENTS)
+
+struct dsw_port {
+	uint16_t id;
+
+	/* Keeping a pointer here to avoid container_of() calls, which
+	 * are expensive since they are very frequent and will result
+	 * in an integer multiplication (since the port id is an index
+	 * into the dsw_evdev port array).
+	 */
+	struct dsw_evdev *dsw;
+
+	uint16_t dequeue_depth;
+	uint16_t enqueue_depth;
+
+	int32_t new_event_threshold;
+
+	struct rte_event_ring *in_ring __rte_cache_aligned;
+} __rte_cache_aligned;
+
 struct dsw_queue {
 	uint8_t schedule_type;
 	uint16_t num_serving_ports;
@@ -31,6 +57,8 @@ struct dsw_queue {
 struct dsw_evdev {
 	struct rte_eventdev_data *data;
 
+	struct dsw_port ports[DSW_MAX_PORTS];
+	uint16_t num_ports;
 	struct dsw_queue queues[DSW_MAX_QUEUES];
 	uint8_t num_queues;
 };
-- 
2.17.1
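
For context, the sketch below (not part of the patch) shows how the new
callbacks are reached through the public eventdev API:
rte_event_port_default_conf_get() ends up in dsw_port_def_conf(), and
rte_event_port_setup() in dsw_port_setup(). It assumes the device has already
been configured with rte_event_dev_configure(); the helper name and the
dev_id/port_id parameters are illustrative only.

#include <rte_eventdev.h>

/* Hypothetical application-side helper: fetch the driver's default port
 * configuration and set up one port with it. With the DSW PMD, the
 * default-conf call lands in dsw_port_def_conf() and the setup call in
 * dsw_port_setup().
 */
static int
setup_one_port(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event_port_conf port_conf;
	int rc;

	rc = rte_event_port_default_conf_get(dev_id, port_id, &port_conf);
	if (rc < 0)
		return rc;

	return rte_event_port_setup(dev_id, port_id, &port_conf);
}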
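
A note on the ring flags, with a standalone sketch (illustration only, not
from the patch): RING_F_SC_DEQ is used because only the owning port dequeues
from its own input ring, and RING_F_EXACT_SZ lets the ring hold at least the
requested number of events instead of requiring a power-of-two count, which
is what allows DSW_IN_RING_SIZE to equal DSW_MAX_EVENTS. The function and
parameter names below are assumptions.

#include <rte_event_ring.h>

/* Illustration only: create a single-consumer input ring able to hold at
 * least 'max_events' events, mirroring the flags used by dsw_port_setup().
 */
static struct rte_event_ring *
create_input_ring_example(const char *name, unsigned int max_events,
			  int socket_id)
{
	return rte_event_ring_create(name, max_events, socket_id,
				     RING_F_SC_DEQ | RING_F_EXACT_SZ);
}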