From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sivaprasad Tummala
Subject: [RFC PATCH 3/5] eventdev: support optional dequeue callbacks
Date: Wed, 19 Apr 2023 02:54:25 -0700
Message-ID: <20230419095427.563185-3-sivaprasad.tummala@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230419095427.563185-1-sivaprasad.tummala@amd.com>
References: <20230419095427.563185-1-sivaprasad.tummala@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Add optional support for inline event processing within the dequeue call.

When a dequeue callback is configured for an event port, events dequeued
from that port are passed to the callback function to allow additional
processing, e.g. unpacking a batch of packets from each event on dequeue
before they are returned to the application.
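To illustrate the intended usage, here is a minimal sketch (the callback
count_events_cb, its counter and the dev_id/port_id values are
hypothetical, for illustration only):

	/* Hypothetical callback: counts dequeued events. user_param
	 * points to an application-owned counter.
	 */
	static uint16_t
	count_events_cb(uint8_t dev_id, uint8_t port_id,
			struct rte_event *ev, uint16_t nb_events,
			void *user_param)
	{
		uint64_t *count = user_param;

		RTE_SET_USED(dev_id);
		RTE_SET_USED(port_id);
		RTE_SET_USED(ev);
		*count += nb_events;
		return nb_events;
	}

	static uint64_t count;
	const struct rte_event_dequeue_callback *cb;

	cb = rte_event_add_dequeue_callback(dev_id, port_id,
			count_events_cb, &count);
	if (cb == NULL)
		rte_panic("cannot add dequeue callback\n");

	/* ... dequeue loop: the callback now runs inside every
	 * rte_event_dequeue_burst() on this port ...
	 */

	/* Unlink the callback, then free it only once no in-flight
	 * dequeue can still be executing it.
	 */
	rte_event_remove_dequeue_callback(dev_id, port_id, cb);
	rte_free((void *)(uintptr_t)cb);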
Signed-off-by: Sivaprasad Tummala
---
 lib/eventdev/eventdev_pmd.h      |  17 ++++
 lib/eventdev/eventdev_private.c  |  17 ++++
 lib/eventdev/rte_eventdev.c      |  81 ++++++++++++++++++
 lib/eventdev/rte_eventdev.h      | 145 ++++++++++++++++++++++++++++++-
 lib/eventdev/rte_eventdev_core.h |  12 ++-
 lib/eventdev/version.map         |   6 ++
 6 files changed, 275 insertions(+), 3 deletions(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 7b12f80f57..c87e06993f 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -97,6 +97,19 @@ struct rte_eventdev_global {
 	uint8_t nb_devs;	/**< Number of devices found */
 };
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * port on dequeue.
+ */
+struct rte_event_dequeue_callback {
+	struct rte_event_dequeue_callback *next;
+	union {
+		rte_dequeue_callback_fn dequeue;
+	} fn;
+	void *param;
+};
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
@@ -173,6 +186,10 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue burst function. */
 	event_maintain_t maintain;
 	/**< Pointer to PMD port maintenance function. */
+	struct rte_event_dequeue_callback *post_dequeue_burst_cbs[RTE_EVENT_MAX_PORTS_PER_DEV];
+	/**< User-supplied functions called from dequeue_burst to post-process
+	 * dequeued events before returning them to the application.
+	 */
 	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
 	/**< Pointer to PMD eth Tx adapter burst enqueue function with
 	 * events destined to same Eth port & Tx queue.
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..6d1cbdb17d 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -118,4 +118,21 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 	fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest;
 	fp_op->ca_enqueue = dev->ca_enqueue;
 	fp_op->data = dev->data->ports;
+	fp_op->ev_port.clbk = (void **)(uintptr_t)dev->post_dequeue_burst_cbs;
+	fp_op->ev_port.data = dev->data->ports;
+}
+
+uint16_t
+rte_event_dequeue_callbacks(uint8_t dev_id, uint8_t port_id,
+		struct rte_event *ev, uint16_t nb_events, void *opaque)
+{
+	uint16_t nb_rx = nb_events;
+	const struct rte_event_dequeue_callback *cb = opaque;
+
+	while (cb != NULL) {
+		nb_rx = cb->fn.dequeue(dev_id, port_id, ev,
+				nb_rx, cb->param);
+		cb = cb->next;
+	}
+	return nb_rx;
 }
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index ff77194783..0d43cb2d0a 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -38,6 +38,9 @@ static struct rte_eventdev_global eventdev_globals = {
 /* Public fastpath APIs. */
 struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
 
+/* spinlock for add/remove dequeue callbacks */
+static rte_spinlock_t event_dev_dequeue_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -860,6 +863,84 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	return 0;
 }
 
+const struct rte_event_dequeue_callback *
+rte_event_add_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		rte_dequeue_callback_fn fn, void *user_param)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dequeue_callback *cb;
+	struct rte_event_dequeue_callback *tail;
+
+	/* check input parameters */
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, NULL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return NULL;
+	}
+
+	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+	if (cb == NULL) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	cb->fn.dequeue = fn;
+	cb->param = user_param;
+
+	rte_spinlock_lock(&event_dev_dequeue_cb_lock);
+	/* Add the callbacks in fifo order. */
+	tail = rte_eventdevs[dev_id].post_dequeue_burst_cbs[port_id];
+	if (!tail) {
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(
+			&rte_eventdevs[dev_id].post_dequeue_burst_cbs[port_id],
+			cb, __ATOMIC_RELEASE);
+	} else {
+		while (tail->next)
+			tail = tail->next;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to data plane.
+		 */
+		__atomic_store_n(&tail->next, cb, __ATOMIC_RELEASE);
+	}
+	rte_spinlock_unlock(&event_dev_dequeue_cb_lock);
+
+	return cb;
+}
+
+int
+rte_event_remove_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		const struct rte_event_dequeue_callback *user_cb)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dequeue_callback *cb;
+	struct rte_event_dequeue_callback **prev_cb;
+	int ret = -EINVAL;
+
+	/* Check input parameters. */
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (user_cb == NULL || !is_valid_port(dev, port_id))
+		return -EINVAL;
+
+	rte_spinlock_lock(&event_dev_dequeue_cb_lock);
+	prev_cb = &dev->post_dequeue_burst_cbs[port_id];
+	for (; *prev_cb != NULL; prev_cb = &cb->next) {
+		cb = *prev_cb;
+		if (cb == user_cb) {
+			/* Remove the user cb from the callback list. */
+			__atomic_store_n(prev_cb, cb->next, __ATOMIC_RELAXED);
+			ret = 0;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&event_dev_dequeue_cb_lock);
+
+	return ret;
+}
+
 int
 rte_event_port_get_monitor_addr(uint8_t dev_id, uint8_t port_id,
 		struct rte_power_monitor_cond *pmc)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 841b1fb9b5..9ccd259058 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -948,6 +948,100 @@ void
 rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		       rte_eventdev_port_flush_t release_cb, void *args);
 
+struct rte_event_dequeue_callback;
+
+/**
+ * Function type used for dequeue event processing callbacks.
+ *
+ * The callback function is called on dequeue with a burst of events that have
+ * been received on the given event port.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   for output to be populated with the dequeued event objects.
+ * @param nb_events
+ *   The maximum number of event objects to dequeue, typically number of
+ *   rte_event_port_dequeue_depth() available for this port.
+ * @param user_param
+ *   Opaque pointer of event port callback related data.
+ *
+ * @return
+ *   The number of event objects returned to the user.
+ */
+typedef uint16_t (*rte_dequeue_callback_fn)(uint8_t dev_id, uint8_t port_id,
+		struct rte_event *ev, uint16_t nb_events, void *user_param);
+
+/**
+ * Add a callback to be called on event dequeue on a given event device port.
+ *
+ * This API configures a function to be called for each burst of events
+ * dequeued on a given event device port. The return value is a pointer that
+ * can later be used to remove the callback using
+ * rte_event_remove_dequeue_callback().
+ *
+ * Multiple functions are called in the order that they are added.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param fn
+ *   The callback function.
+ * @param user_param
+ *   A generic pointer parameter which will be passed to each invocation of
+ *   the callback function on this event device port. Inter-thread
+ *   synchronization of any user data changes is the responsibility of the
+ *   user.
+ *
+ * @return
+ *   NULL on error.
+ *   On success, a pointer value which can later be used to remove the
+ *   callback.
+ */
+__rte_experimental
+const struct rte_event_dequeue_callback *
+rte_event_add_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		rte_dequeue_callback_fn fn, void *user_param);
+
+/**
+ * Remove a dequeue event callback from a given event device port.
+ *
+ * This API is used to remove callbacks that were added to an event device
+ * port using rte_event_add_dequeue_callback().
+ *
+ * Note: the callback is removed from the callback list but it isn't freed,
+ * since it may still be in use. The memory for the callback can subsequently
+ * be freed by the application by calling rte_free():
+ *
+ * - Immediately - if the device is stopped, or the user knows that no
+ *   callbacks are in flight, e.g. if called from the thread doing dequeue
+ *   on that port.
+ *
+ * - After a short delay - where the delay is sufficient to allow any
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param user_cb
+ *   The callback function.
+ *
+ * @return
+ *   - 0: Success. Callback was removed.
+ *   - -ENODEV: If *dev_id* is invalid.
+ *   - -EINVAL: The port_id is out of range, or the callback is NULL.
+ */
+__rte_experimental
+int
+rte_event_remove_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		const struct rte_event_dequeue_callback *user_cb);
+
 /**
  * The queue depth of the port on the enqueue side
  */
@@ -2133,6 +2227,34 @@ rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
 			 fp_ops->enqueue_forward_burst);
 }
 
+/**
+ * @internal
+ * Helper routine for rte_event_dequeue_burst().
+ * Should be called at exit from the PMD's rte_event_dequeue() implementation.
+ * Does necessary post-processing - invokes dequeue callbacks if any, etc.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   for output to be populated with the dequeued event objects.
+ * @param nb_events
+ *   The maximum number of event objects to dequeue, typically number of
+ *   rte_event_port_dequeue_depth() available for this port.
+ * @param opaque
+ *   Opaque pointer of event port callback related data.
+ *
+ * @return
+ *   The number of event objects actually dequeued from the port. The return
+ *   value can be less than the value of the *nb_events* parameter when the
+ *   event port's queue is not full.
+ */
+__rte_experimental
+uint16_t rte_event_dequeue_callbacks(uint8_t dev_id, uint8_t port_id,
+		struct rte_event *ev, uint16_t nb_events, void *opaque);
+
 /**
  * Dequeue a burst of events objects or an event object from the event port
  * designated by its *event_port_id*, on an event device designated
@@ -2205,6 +2327,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
 {
 	const struct rte_event_fp_ops *fp_ops;
 	void *port;
+	uint16_t nb_rx;
 
 	fp_ops = &rte_event_fp_ops[dev_id];
 	port = fp_ops->data[port_id];
@@ -2226,10 +2349,28 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
 	 * requests nb_events as const one
 	 */
 	if (nb_events == 1)
-		return (fp_ops->dequeue)(port, ev, timeout_ticks);
+		nb_rx = fp_ops->dequeue(port, ev, timeout_ticks);
 	else
-		return (fp_ops->dequeue_burst)(port, ev, nb_events,
+		nb_rx = fp_ops->dequeue_burst(port, ev, nb_events,
 					       timeout_ticks);
+
+	{
+		void *cb;
+
+		/* __ATOMIC_RELEASE memory order was used when the
+		 * callback was inserted into the list.
+		 * Since there is a clear dependency between loading
+		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+		 * not required.
+		 */
+		cb = __atomic_load_n((void **)&fp_ops->ev_port.clbk[port_id],
+				__ATOMIC_RELAXED);
+		if (unlikely(cb != NULL))
+			nb_rx = rte_event_dequeue_callbacks(dev_id, port_id,
+					ev, nb_rx, cb);
+	}
+
+	return nb_rx;
 }
 
 #define RTE_EVENT_DEV_MAINT_OP_FLUSH          (1 << 0)
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..b364ecc2a5 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,14 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_eventdev_port_data {
+	void **data;
+	/**< points to array of internal port data pointers */
+	void **clbk;
+	/**< points to array of port callback data pointers */
+};
+/**< @internal Structure used to hold opaque eventdev port data. */
+
 struct rte_event_fp_ops {
 	void **data;
 	/**< points to array of internal port data pointers */
@@ -65,7 +73,9 @@ struct rte_event_fp_ops {
 	/**< PMD Tx adapter enqueue same destination function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
 	/**< PMD Crypto adapter enqueue function. */
-	uintptr_t reserved[6];
+	struct rte_eventdev_port_data ev_port;
+	/**< Eventdev port data. */
+	uintptr_t reserved[3];
 } __rte_cache_aligned;
 
 extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 89068a5713..8ce54f5017 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,12 @@ EXPERIMENTAL {
 	rte_event_eth_tx_adapter_runtime_params_init;
 	rte_event_eth_tx_adapter_runtime_params_set;
 	rte_event_timer_remaining_ticks_get;
+
+	# added in 23.07
+	rte_event_dequeue_callbacks;
+	rte_event_add_dequeue_callback;
+	rte_event_remove_dequeue_callback;
+	rte_event_port_get_monitor_addr;
 };
 
 INTERNAL {
-- 
2.34.1