From mboxrd@z Thu Jan  1 00:00:00 1970
Cc: Pavan Nikhilesh
Subject: [PATCH v4 1/6] eventdev: introduce event pre-scheduling
Date: Tue, 1 Oct 2024 18:48:56 +0530
Message-ID: <20241001131901.7920-2-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com>
References: <20241001061411.2537-1-pbhagavatula@marvell.com>
 <20241001131901.7920-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Event pre-scheduling improves scheduling performance by assigning events
to event ports in advance when dequeues are issued.
The dequeue operation initiates the pre-schedule operation, which completes
in parallel without affecting the dequeued event flow contexts and
dequeue latency.

Event devices can indicate pre-scheduling capabilities using
`RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE` and
`RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE` in the `event_dev_cap`
field reported by the event device info get function.
Applications can select the pre-schedule type through
`rte_event_dev_config.preschedule_type` when calling
`rte_event_dev_configure`.

The supported pre-schedule types are:
* `RTE_EVENT_DEV_PRESCHEDULE_NONE` - No pre-scheduling.
* `RTE_EVENT_DEV_PRESCHEDULE` - Always issue a pre-schedule on dequeue.
* `RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE` - Delay issuing pre-schedule until
  there are no forward progress constraints with the held flow contexts.

Signed-off-by: Pavan Nikhilesh
---
 app/test/test_eventdev.c                    | 108 ++++++++++++++++++++
 doc/guides/eventdevs/features/default.ini   |   1 +
 doc/guides/prog_guide/eventdev/eventdev.rst |  22 ++++
 doc/guides/rel_notes/release_24_11.rst      |   8 ++
 lib/eventdev/rte_eventdev.h                 |  48 +++++++++
 5 files changed, 187 insertions(+)

diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index e4e234dc98..d75fc8fbbc 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1250,6 +1250,112 @@ test_eventdev_profile_switch(void)
 	return TEST_SUCCESS;
 }
 
+static int
+preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, const char *preschedule_name)
+{
+#define NB_EVENTS 1024
+	uint64_t start, total;
+	struct rte_event ev;
+	int rc, cnt;
+
+	ev.event_type = RTE_EVENT_TYPE_CPU;
+	ev.queue_id = 0;
+	ev.op = RTE_EVENT_OP_NEW;
+	ev.u64 = 0xBADF00D0;
+
+	for (cnt = 0; cnt < NB_EVENTS; cnt++) {
+		ev.flow_id = cnt;
+		rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+		TEST_ASSERT(rc == 1, "Failed to enqueue event");
+	}
+
+	RTE_SET_USED(preschedule_type);
+	total = 0;
+	while (cnt) {
+		start = rte_rdtsc_precise();
+		rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+		if (rc) {
+			total += rte_rdtsc_precise() - start;
+			cnt--;
+		}
+	}
+	printf("Preschedule type : %s, avg cycles %" PRIu64 "\n", preschedule_name,
+	       total / NB_EVENTS);
+
+	return TEST_SUCCESS;
+}
+
+static int
+preschedule_configure(rte_event_dev_preschedule_type_t type, struct rte_event_dev_info *info)
+{
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_queue_conf qcfg;
+	struct rte_event_port_conf pcfg;
+	int rc;
+
+	devconf_set_default_sane_values(&dev_conf, info);
+	dev_conf.nb_event_ports = 1;
+	dev_conf.nb_event_queues = 1;
+	dev_conf.preschedule_type = type;
+
+	rc = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev");
+
+	rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+	TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+	rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+	TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+	rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+	TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+	rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+	TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+	rc = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0);
+	TEST_ASSERT(rc == (int)dev_conf.nb_event_queues, "Failed to link port, device %d",
+		    TEST_DEV_ID);
+
+	rc = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+	return 0;
+}
+
+static int
+test_eventdev_preschedule_configure(void)
+{
+	struct rte_event_dev_info info;
+	int rc;
+
+	rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+	if ((info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) == 0)
+		return TEST_SKIPPED;
+
+	rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_NONE, &info);
+	TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev");
+	rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE");
+	TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_NONE");
+
+	rte_event_dev_stop(TEST_DEV_ID);
+	rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE, &info);
+	TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev");
+	rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RTE_EVENT_DEV_PRESCHEDULE");
+	TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE");
+
+	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) {
+		rte_event_dev_stop(TEST_DEV_ID);
+		rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, &info);
+		TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev");
+		rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE,
+				      "RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE");
+		TEST_ASSERT_SUCCESS(
+			rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE");
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_close(void)
 {
@@ -1310,6 +1416,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
 			test_eventdev_start_stop),
 		TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
 			test_eventdev_profile_switch),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_preschedule_configure),
 		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
 			test_eventdev_link),
 		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 1cc4303fe5..c8d5ed2d74 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -22,6 +22,7 @@ carry_flow_id              =
 maintenance_free           =
 runtime_queue_attr         =
 profile_links              =
+preschedule                =
 
 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst
index fb6dfce102..341b9bb2c6 100644
--- a/doc/guides/prog_guide/eventdev/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev/eventdev.rst
@@ -357,6 +357,28 @@ Worker path:
                 // Process the event received.
         }
 
+Event Pre-scheduling
+~~~~~~~~~~~~~~~~~~~~
+
+Event pre-scheduling improves scheduling performance by assigning events to event ports in advance
+when dequeues are issued.
+The ``rte_event_dequeue_burst()`` operation initiates the pre-schedule operation, which completes
+in parallel without affecting the dequeued event flow contexts and dequeue latency.
+On the next dequeue operation, the pre-scheduled events are dequeued and pre-schedule is initiated
+again.
+
+An application can use event pre-scheduling if the event device supports it at either the device
+level or at an individual port level.
+The application can check the pre-schedule capability by checking if ``rte_event_dev_info.event_dev_cap``
+has the bit ``RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE`` set. If present, pre-scheduling can be enabled
+at device configuration time by setting the appropriate pre-schedule type in
+``rte_event_dev_config.preschedule_type``, as shown in the example below.
+
+Currently, the following pre-schedule types are supported:
+
+ * ``RTE_EVENT_DEV_PRESCHEDULE_NONE`` - No pre-scheduling.
+ * ``RTE_EVENT_DEV_PRESCHEDULE`` - Always issue a pre-schedule when dequeue is issued.
+ * ``RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE`` - Issue pre-schedule when dequeue is issued and there are
+   no forward progress constraints.
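+
+A minimal configuration sketch is shown below; it is illustrative only, assumes ``dev_id`` refers
+to an already probed event device, and omits error handling and the remaining configuration fields:
+
+.. code-block:: c
+
+        struct rte_event_dev_config config = {0};
+        struct rte_event_dev_info info;
+
+        rte_event_dev_info_get(dev_id, &info);
+
+        if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE)
+                config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE;
+        else
+                config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_NONE;
+
+        /* Fill in nb_event_queues, nb_event_ports and the other limits before configuring. */
+        rte_event_dev_configure(dev_id, &config);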
+
 Starting the EventDev
 ~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index 0ff70d9057..eae5cc326b 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -55,6 +55,14 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added event device pre-scheduling support.**
+
+  Added support for pre-scheduling of events to event ports to improve
+  scheduling performance and latency.
+
+  * Added ``rte_event_dev_config::preschedule_type`` to configure the device
+    level pre-scheduling type.
+
 
 Removed Items
 -------------
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 08e5f9320b..5ea7f5a07b 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -446,6 +446,30 @@ struct rte_event;
  * @see RTE_SCHED_TYPE_PARALLEL
  */
 
+#define RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE (1ULL << 16)
+/**< Event device supports event pre-scheduling.
+ *
+ * When this capability is available, the application can enable event pre-scheduling on the event
+ * device to pre-schedule events to an event port when `rte_event_dequeue_burst()`
+ * is issued.
+ * The pre-schedule process starts with the `rte_event_dequeue_burst()` call and the
+ * pre-scheduled events are returned on the next `rte_event_dequeue_burst()` call.
+ *
+ * @see rte_event_dev_configure()
+ */
+
+#define RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE (1ULL << 17)
+/**< Event device supports adaptive event pre-scheduling.
+ *
+ * When this capability is available, the application can enable adaptive pre-scheduling
+ * on the event device where the events are pre-scheduled when there are no forward
+ * progress constraints with the currently held flow contexts.
+ * The pre-schedule process starts with the `rte_event_dequeue_burst()` call and the
+ * pre-scheduled events are returned on the next `rte_event_dequeue_burst()` call.
+ *
+ * @see rte_event_dev_configure()
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
 /**< Highest priority level for events and queues.
@@ -680,6 +704,25 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
  * @see rte_event_dequeue_timeout_ticks(), rte_event_dequeue_burst()
  */
 
+typedef enum {
+	RTE_EVENT_DEV_PRESCHEDULE_NONE = 0,
+	/**< Disable pre-schedule across the event device or on a given event port.
+	 * @ref rte_event_dev_config.preschedule_type
+	 */
+	RTE_EVENT_DEV_PRESCHEDULE,
+	/**< Enable pre-schedule always across the event device or a given event port.
+	 * @ref rte_event_dev_config.preschedule_type
+	 * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE
+	 */
+	RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE,
+	/**< Enable adaptive pre-schedule across the event device or a given event port.
+	 * Delay issuing pre-schedule until there are no forward progress constraints with
+	 * the held flow contexts.
+	 * @ref rte_event_dev_config.preschedule_type
+	 * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE
+	 */
+} rte_event_dev_preschedule_type_t;
+
 /** Event device configuration structure */
 struct rte_event_dev_config {
 	uint32_t dequeue_timeout_ns;
@@ -752,6 +795,11 @@ struct rte_event_dev_config {
 	 * optimized for single-link usage, this field is a hint for how many
 	 * to allocate; otherwise, regular event ports and queues will be used.
 	 */
+	rte_event_dev_preschedule_type_t preschedule_type;
+	/**< Event pre-schedule type to use across the event device, if supported.
+	 * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE
+	 * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE
+	 */
 };
 
 /**
-- 
2.25.1
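
Pre-scheduling is intended to be transparent to the datapath: a worker loop needs no changes,
because each rte_event_dequeue_burst() call both returns previously pre-scheduled events and
triggers the next pre-schedule. A minimal sketch (hypothetical dev_id, port_id and burst size;
no error handling):

	#include <rte_eventdev.h>

	/* Hypothetical worker loop; with pre-scheduling enabled at configure
	 * time, the dequeue call below also pre-schedules events for the
	 * next iteration.
	 */
	static void
	worker_loop(uint8_t dev_id, uint8_t port_id)
	{
		struct rte_event ev[32];
		uint16_t nb, i;

		while (1) {
			nb = rte_event_dequeue_burst(dev_id, port_id, ev, 32, 0);
			for (i = 0; i < nb; i++) {
				/* Process ev[i]. */
			}
		}
	}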