From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Tue, 16 Mar 2021 21:18:37 +0530
Message-ID: <20210316154846.1518-1-pbhagavatula@marvell.com>
In-Reply-To: <20210220220957.4583-1-pbhagavatula@marvell.com>
References: <20210220220957.4583-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 0/8] Introduce event vectorization
List-Id: DPDK patches and discussions <dev@dpdk.org>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

In the traditional event programming model, events are identified by a
flow-id and a uintptr_t. The flow-id uniquely identifies a given event
and determines the order of scheduling based on schedule type; the
uintptr_t holds a single object.

Event devices also support burst mode with a configurable dequeue
depth, i.e. each dequeue call returns multiple events, and each event
might be at a different stage of the pipeline. Having a burst of events
belonging to different stages in a dequeue burst is not only difficult
to vectorize but also increases the scheduler overhead and the
application overhead of pipelining events further. Using event vectors
we see a performance gain of ~628% as shown in [1].

By introducing event vectorization, each event will be capable of
holding multiple uintptr_t values of the same flow, thereby allowing
applications to vectorize their pipeline and reduce the complexity of
pipelining events across multiple stages. This also reduces the
complexity of handling enqueue and dequeue on an event device.
Since event devices are transparent to the events they are scheduling,
event producers such as the eth_rx_adapter, crypto_adapter, etc. are
responsible for vectorizing buffers of the same flow into a single
event.

The series also breaks ABI in patch [8/8], which is targeted at the
v21.11 release.

The dpdk-test-eventdev application has been updated with options to
test multiple vector sizes and timeouts.

[1] Performance measured on an ARM Cortex-A72 equivalent processor with
the software event device (--vdev=event_sw0), a single worker core, a
single stage, and one service core for the Rx adapter, Tx adapter and
scheduling.

Without event vectorization:

	./build/app/dpdk-test-eventdev -l 7-23 -s 0x700 --vdev="event_sw0" -- \
		--prod_type_ethdev --nb_pkts=0 --verbose 2 \
		--test=pipeline_queue --stlist=a --wlcores=20

	Port[0] using Rx adapter[0] configured
	Port[0] using Tx adapter[0] Configured

	4.728 mpps avg 4.728 mpps

With event vectorization:

	./build/app/dpdk-test-eventdev -l 7-23 -s 0x700 --vdev="event_sw0" -- \
		--prod_type_ethdev --nb_pkts=0 --verbose 2 \
		--test=pipeline_queue --stlist=a --wlcores=20 \
		--enable_vector --nb_eth_queues 1 --vector_size 256

	Port[0] using Rx adapter[0] configured
	Port[0] using Tx adapter[0] Configured

	34.383 mpps avg 34.383 mpps

Having dedicated service cores for each Rx queue and tweaking the
vector and dequeue burst sizes would further improve performance.
API usage is shown below:

Configuration:

	struct rte_event_eth_rx_adapter_event_vector_config vec_conf;

	vector_pool = rte_event_vector_pool_create("vector_pool",
			nb_elem, 0, vector_size, socket_id);

	rte_event_eth_rx_adapter_create(id, event_id, &adptr_conf);
	rte_event_eth_rx_adapter_queue_add(id, eth_id, -1, &queue_conf);
	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) {
		vec_conf.vector_sz = vector_size;
		vec_conf.vector_timeout_ns = vector_tmo_nsec;
		vec_conf.vector_mp = vector_pool;
		rte_event_eth_rx_adapter_queue_event_vector_config(id,
				eth_id, -1, &vec_conf);
	}

Fastpath:

	num = rte_event_dequeue_burst(event_id, port_id, &ev, 1, 0);
	if (!num)
		continue;

	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
		switch (ev.event_type) {
		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR: {
			struct rte_mbuf **mbufs;

			mbufs = ev.vector_ev->mbufs;
			for (i = 0; i < ev.vector_ev->nb_elem; i++)
				; /* Process mbufs[i]. */
			break;
		}
		case ...
		}
	}
	...

v2 Changes:
- Multiple grammatical and style fixes. (Jerin)
- Add parameter to define vector size in powers of 2. (Jerin)
- Redo patch series w/o breaking ABI till the last patch. (David)
- Add deprecation notice to announce ABI break in 21.11. (David)
- Add vector limits validation to app/test-eventdev.
Pavan Nikhilesh (8):
  eventdev: introduce event vector capability
  eventdev: introduce event vector Rx capability
  eventdev: introduce event vector Tx capability
  eventdev: add Rx adapter event vector support
  eventdev: add Tx adapter event vector support
  app/eventdev: add event vector mode in pipeline test
  doc: announce event Rx adapter config changes
  eventdev: simplify Rx adapter event vector config

 app/test-eventdev/evt_common.h                 |   4 +
 app/test-eventdev/evt_options.c                |  52 +++
 app/test-eventdev/evt_options.h                |   4 +
 app/test-eventdev/test_pipeline_atq.c          | 310 ++++++++++++-
 app/test-eventdev/test_pipeline_common.c       |  69 ++-
 app/test-eventdev/test_pipeline_common.h       |  18 +
 app/test-eventdev/test_pipeline_queue.c        | 320 ++++++++++++-
 .../prog_guide/event_ethernet_rx_adapter.rst   |  38 ++
 .../prog_guide/event_ethernet_tx_adapter.rst   |  12 +
 doc/guides/prog_guide/eventdev.rst             |  36 +-
 doc/guides/rel_notes/deprecation.rst           |   9 +
 doc/guides/tools/testeventdev.rst              |  28 ++
 lib/librte_eventdev/eventdev_pmd.h             |  31 +-
 .../rte_event_eth_rx_adapter.c                 | 305 ++++++++++++-
 .../rte_event_eth_rx_adapter.h                 |  68 +++
 .../rte_event_eth_tx_adapter.c                 |  66 ++-
 lib/librte_eventdev/rte_eventdev.c             |  11 +-
 lib/librte_eventdev/rte_eventdev.h             | 428 +++++++++++-------
 lib/librte_eventdev/version.map                |   4 +
 19 files changed, 1568 insertions(+), 245 deletions(-)

--
2.17.1