From mboxrd@z Thu Jan 1 00:00:00 1970
From: 
To: , Ray Kinsella 
CC: , Pavan Nikhilesh 
Date: Wed, 6 Oct 2021 12:20:01 +0530
Message-ID: <20211006065012.16508-4-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211006065012.16508-1-pbhagavatula@marvell.com>
References: <20211003082710.8398-1-pbhagavatula@marvell.com> <20211006065012.16508-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 04/14] eventdev: move inline APIs into separate structure

From: Pavan Nikhilesh 

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.

Signed-off-by: Pavan Nikhilesh 
Acked-by: Ray Kinsella 
---
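Note (not part of the patch): a sketch of how a public inline API could sit
on top of the flat rte_event_fp_ops[] array added here, indexing the array by
device id and taking the per-port pointer from ->data instead of touching
struct rte_eventdev. The wrapper name example_event_enqueue_burst and the
missing parameter checks are assumptions for illustration only, not the
series' actual API rework.

/* Assumes the declarations from rte_eventdev.h / rte_eventdev_core.h
 * (struct rte_event, struct rte_event_fp_ops, rte_event_fp_ops[]).
 */
static inline uint16_t
example_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
			    const struct rte_event ev[], uint16_t nb_events)
{
	const struct rte_event_fp_ops *fp_ops = &rte_event_fp_ops[dev_id];
	void *port = fp_ops->data[port_id];

	/* An unconfigured device still resolves to the dummy callbacks
	 * installed by event_dev_fp_ops_reset(), which log an error and
	 * return 0 instead of crashing.
	 */
	return fp_ops->enqueue_burst(port, ev, nb_events);
}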
 lib/eventdev/eventdev_pmd.h      |  38 +++++++++++
 lib/eventdev/eventdev_pmd_pci.h  |   4 +-
 lib/eventdev/eventdev_private.c  | 112 +++++++++++++++++++++++++++++++
 lib/eventdev/meson.build         |   1 +
 lib/eventdev/rte_eventdev.c      |  22 +++++-
 lib/eventdev/rte_eventdev_core.h |  28 ++++++++
 lib/eventdev/version.map         |   6 ++
 7 files changed, 209 insertions(+), 2 deletions(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7eb2aa0520..b188280778 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1189,4 +1189,42 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ *
+ * @internal
+ * This is the last step of device probing.
+ * It must be called after a port is allocated and initialized successfully.
+ *
+ * @param eventdev
+ *   New event device.
+ */
+__rte_internal
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev);
+
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param fp_ops
+ *   The *fp_ops* pointer to reset.
+ */
+__rte_internal
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op);
+
+/**
+ * Set eventdevice fastpath APIs to event device values.
+ *
+ * @param fp_ops
+ *   The *fp_ops* pointer to set.
+ */
+__rte_internal
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_ops,
+		     const struct rte_eventdev *dev);
+
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/eventdev/eventdev_pmd_pci.h b/lib/eventdev/eventdev_pmd_pci.h
index 2f12a5eb24..499852db16 100644
--- a/lib/eventdev/eventdev_pmd_pci.h
+++ b/lib/eventdev/eventdev_pmd_pci.h
@@ -67,8 +67,10 @@ rte_event_pmd_pci_probe_named(struct rte_pci_driver *pci_drv,
 
 	/* Invoke PMD device initialization function */
 	retval = devinit(eventdev);
-	if (retval == 0)
+	if (retval == 0) {
+		event_dev_probing_finish(eventdev);
 		return 0;
+	}
 
 	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
 			" failed", pci_drv->driver.name,
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..9084833847
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused void *port,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused void *port,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused void *port, __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused void *port,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused void *port,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused void *port,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused void *port,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
+{
+	static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
+	static const struct rte_event_fp_ops dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+		.data = dummy_data,
+	};
+
+	*fp_op = dummy;
+}
+
+void
+event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
+		     const struct rte_eventdev *dev)
+{
+	fp_op->enqueue = dev->enqueue;
+	fp_op->enqueue_burst = dev->enqueue_burst;
+	fp_op->enqueue_new_burst = dev->enqueue_new_burst;
+	fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
+	fp_op->dequeue = dev->dequeue;
+	fp_op->dequeue_burst = dev->dequeue_burst;
+	fp_op->txa_enqueue = dev->txa_enqueue;
+	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
+	fp_op->ca_enqueue = dev->ca_enqueue;
+	fp_op->data = dev->data->ports;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
 endif
 
 sources = files(
+        'eventdev_private.c',
         'rte_eventdev.c',
         'rte_event_ring.c',
         'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index bfcfa31cd1..4c30a37831 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -46,6 +46,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs = 0
 };
 
+/* Public fastpath APIs. */
+struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -300,8 +303,8 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -470,10 +473,13 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 		event_dev_queue_config(dev, 0);
 		event_dev_port_config(dev, 0);
 	}
@@ -1244,6 +1250,8 @@ rte_event_dev_start(uint8_t dev_id)
 	else
 		return diag;
 
+	event_dev_fp_ops_set(rte_event_fp_ops + dev_id, dev);
+
 	return 0;
 }
 
@@ -1284,6 +1292,7 @@ rte_event_dev_stop(uint8_t dev_id)
 	dev->data->dev_started = 0;
 	(*dev->dev_ops->dev_stop)(dev);
 	rte_eventdev_trace_stop(dev_id);
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 }
 
 int
@@ -1302,6 +1311,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + dev_id);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
@@ -1435,6 +1445,7 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	if (eventdev == NULL)
 		return -EINVAL;
 
+	event_dev_fp_ops_reset(rte_event_fp_ops + eventdev->data->dev_id);
 	eventdev->attached = RTE_EVENTDEV_DETACHED;
 	eventdev_globals.nb_devs--;
 
@@ -1460,6 +1471,15 @@ rte_event_pmd_release(struct rte_eventdev *eventdev)
 	return 0;
 }
 
+void
+event_dev_probing_finish(struct rte_eventdev *eventdev)
+{
+	if (eventdev == NULL)
+		return;
+
+	event_dev_fp_ops_set(rte_event_fp_ops + eventdev->data->dev_id,
+			     eventdev);
+}
 
 static int
 handle_dev_list(const char *cmd __rte_unused,
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 115b97e431..4461073101 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -39,6 +39,34 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_event_fp_ops {
+	event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[2];
+
+	void **data;
+	/**< points to array of internal port data pointers */
+	uintptr_t reserved2[4];
+} __rte_cache_aligned;
+
+extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..a3a732089b 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	#added in 21.11
+	rte_event_fp_ops;
+
 	local: *;
 };
 
@@ -141,6 +144,9 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	event_dev_fp_ops_reset;
+	event_dev_fp_ops_set;
+	event_dev_probing_finish;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
-- 
2.17.1