From: Pavan Nikhilesh Bhagavatula
To: "Jayatheerthan, Jay", Jerin Jacob Kollanukkaran, "Carrillo, Erik G",
 "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com",
 "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J"
Cc: "dev@dpdk.org"
Date: Thu, 25 Mar 2021 13:14:07 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210324050525.4489-1-pbhagavatula@marvell.com>
 <20210324050525.4489-5-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event vector support
List-Id: DPDK patches and discussions

>-----Original Message-----
>From: Jayatheerthan, Jay
>Sent: Thursday, March 25, 2021 4:07 PM
>To: Pavan Nikhilesh Bhagavatula; Jerin Jacob Kollanukkaran; Carrillo,
>Erik G; Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
>Van Haaren, Harry; mattias.ronnblom; Ma, Liang J
>Cc: dev@dpdk.org
>Subject: [EXT] RE: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter
>event vector support
>
>External Email
>
>----------------------------------------------------------------------
>> -----Original Message-----
>> From: pbhagavatula@marvell.com
>> Sent: Wednesday, March 24, 2021 10:35 AM
>> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G; Gujjar,
>> Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van Haaren,
>> Harry; mattias.ronnblom; Ma, Liang J
>> Cc: dev@dpdk.org; Pavan Nikhilesh
>> Subject: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event
>> vector support
>>
>> From: Pavan Nikhilesh
>>
>> Add event vector support for event
>> eth Rx adapter, the implementation creates vector flows based on
>> port and queue identifier of the received mbufs.
>>
>> Signed-off-by: Pavan Nikhilesh
>> ---
>>  lib/librte_eventdev/eventdev_pmd.h |   7 +-
>>  .../rte_event_eth_rx_adapter.c     | 257 ++++++++++++++++--
>>  lib/librte_eventdev/rte_eventdev.c |   6 +-
>>  3 files changed, 250 insertions(+), 20 deletions(-)
>>
>> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
>> index 9297f1433..0f724ac85 100644
>> --- a/lib/librte_eventdev/eventdev_pmd.h
>> +++ b/lib/librte_eventdev/eventdev_pmd.h
>> @@ -69,9 +69,10 @@ extern "C" {
>> 	} \
>> } while (0)
>>
>> -#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
>> -	((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
>> -	(RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
>> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
>> +	((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
>> +	(RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) | \
>> +	(RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
>>
>> #define RTE_EVENT_CRYPTO_ADAPTER_SW_CAP \
>> 	RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA
>>
>> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> index ac8ba5bf0..c71990078 100644
>> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> @@ -26,6 +26,10 @@
>> #define BATCH_SIZE 32
>> #define BLOCK_CNT_THRESHOLD 10
>> #define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE)
>> +#define MAX_VECTOR_SIZE 1024
>> +#define MIN_VECTOR_SIZE 4
>> +#define MAX_VECTOR_NS 1E9
>> +#define MIN_VECTOR_NS 1E5
>>
>> #define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
>> #define ETH_RX_ADAPTER_MEM_NAME_LEN 32
>> @@ -59,6 +63,20 @@ struct eth_rx_poll_entry {
>> 	uint16_t eth_rx_qid;
>> };
>>
>> +struct eth_rx_vector_data {
>> +	TAILQ_ENTRY(eth_rx_vector_data) next;
>> +	uint16_t port;
>> +	uint16_t queue;
>> +	uint16_t max_vector_count;
>> +	uint64_t event;
>> +	uint64_t ts;
>> +	uint64_t vector_timeout_ticks;
>> +	struct rte_mempool *vector_pool;
>> +	struct rte_event_vector *vector_ev;
>> +} __rte_cache_aligned;
>> +
>> +TAILQ_HEAD(eth_rx_vector_data_list, eth_rx_vector_data);
>> +
>> /* Instance per adapter */
>> struct rte_eth_event_enqueue_buffer {
>> 	/* Count of events in this buffer */
>> @@ -92,6 +110,14 @@ struct rte_event_eth_rx_adapter {
>> 	uint32_t wrr_pos;
>> 	/* Event burst buffer */
>> 	struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
>> +	/* Vector enable flag */
>> +	uint8_t ena_vector;
>> +	/* Timestamp of previous vector expiry list traversal */
>> +	uint64_t prev_expiry_ts;
>> +	/* Minimum ticks to wait before traversing expiry list */
>> +	uint64_t vector_tmo_ticks;
>> +	/* vector list */
>> +	struct eth_rx_vector_data_list vector_list;
>> 	/* Per adapter stats */
>> 	struct rte_event_eth_rx_adapter_stats stats;
>> 	/* Block count, counts up to BLOCK_CNT_THRESHOLD */
>> @@ -198,9 +224,11 @@ struct eth_device_info {
>> struct eth_rx_queue_info {
>> 	int queue_enabled;	/* True if added */
>> 	int intr_enabled;
>> +	uint8_t ena_vector;
>> 	uint16_t wt;		/* Polling weight */
>> 	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
>> 	uint64_t event;
>> +	struct eth_rx_vector_data vector_data;
>> };
>>
>> static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
>> @@ -722,6 +750,9 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>> 		&rx_adapter->event_enqueue_buffer;
>> 	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>>
>> +	if (!buf->count)
>> +		return 0;
>> +
>> 	uint16_t n = rte_event_enqueue_new_burst(rx_adapter->eventdev_id,
>> 					rx_adapter->event_port_id,
>> 					buf->events,
>> @@ -742,6 +773,72 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>> 	return n;
>> }
>>
>> +static inline uint16_t
>> +rxa_create_event_vector(struct rte_event_eth_rx_adapter *rx_adapter,
>> +			struct eth_rx_queue_info *queue_info,
>> +			struct rte_eth_event_enqueue_buffer *buf,
>> +			struct rte_mbuf **mbufs, uint16_t num)
>> +{
>> +	struct rte_event *ev = &buf->events[buf->count];
>> +	struct eth_rx_vector_data *vec;
>> +	uint16_t filled, space, sz;
>> +
>> +	filled = 0;
>> +	vec = &queue_info->vector_data;
>> +	while (num) {
>> +		if (vec->vector_ev == NULL) {
>> +			if (rte_mempool_get(vec->vector_pool,
>> +					    (void **)&vec->vector_ev) < 0) {
>> +				rte_pktmbuf_free_bulk(mbufs, num);
>> +				return 0;
>> +			}
>> +			vec->vector_ev->nb_elem = 0;
>> +			vec->vector_ev->port = vec->port;
>> +			vec->vector_ev->queue = vec->queue;
>> +			vec->vector_ev->attr_valid = true;
>> +			TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
>> +		} else if (vec->vector_ev->nb_elem == vec->max_vector_count) {
>
>Is there a case where nb_elem > max_vector_count as we accumulate sz to it?

I don't think so, that would overflow the vector event.

>> +			/* Event ready. */
>> +			ev->event = vec->event;
>> +			ev->vec = vec->vector_ev;
>> +			ev++;
>> +			filled++;
>> +			vec->vector_ev = NULL;
>> +			TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
>> +			if (rte_mempool_get(vec->vector_pool,
>> +					    (void **)&vec->vector_ev) < 0) {
>> +				rte_pktmbuf_free_bulk(mbufs, num);
>> +				return 0;
>> +			}
>> +			vec->vector_ev->nb_elem = 0;
>> +			vec->vector_ev->port = vec->port;
>> +			vec->vector_ev->queue = vec->queue;
>> +			vec->vector_ev->attr_valid = true;
>> +			TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
>> +		}
>> +
>> +		space = vec->max_vector_count - vec->vector_ev->nb_elem;
>> +		sz = num > space ? space : num;
>> +		memcpy(vec->vector_ev->mbufs + vec->vector_ev->nb_elem, mbufs,
>> +		       sizeof(void *) * sz);
>> +		vec->vector_ev->nb_elem += sz;
>> +		num -= sz;
>> +		mbufs += sz;
>> +		vec->ts = rte_rdtsc();
>> +	}
>> +
>> +	if (vec->vector_ev->nb_elem == vec->max_vector_count) {
>
>Same here.
>
>> +		ev->event = vec->event;
>> +		ev->vec = vec->vector_ev;
>> +		ev++;
>> +		filled++;
>> +		vec->vector_ev = NULL;
>> +		TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
>> +	}
>> +
>> +	return filled;
>> +}
>
>I am seeing more than one repeating code chunk in this function.
>Perhaps, you can give it a try to not repeat. We can drop it if it's
>performance affecting.

I will try to move them to inline functions and test.

>
>> +
>> static inline void
>> rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>> 		uint16_t eth_dev_id,
>> @@ -770,25 +867,30 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>> 	rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
>> 	do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
>
>The RSS related code is executed for the vector case as well. Can this be
>moved inside the ena_vector if condition ?

RSS is used to generate the event flow id; in the vector case the flow id
will be a combination of port and queue id. The idea is that flows having
the same RSS LSB will end up in the same queue.

>
>>
>> -	for (i = 0; i < num; i++) {
>> -		m = mbufs[i];
>> -
>> -		rss = do_rss ?
>> -			rxa_do_softrss(m, rx_adapter->rss_key_be) :
>> -			m->hash.rss;
>> -		ev->event = event;
>> -		ev->flow_id = (rss & ~flow_id_mask) |
>> -				(ev->flow_id & flow_id_mask);
>> -		ev->mbuf = m;
>> -		ev++;
>> +	if (!eth_rx_queue_info->ena_vector) {
>> +		for (i = 0; i < num; i++) {
>> +			m = mbufs[i];
>> +
>> +			rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
>> +				     : m->hash.rss;
>> +			ev->event = event;
>> +			ev->flow_id = (rss & ~flow_id_mask) |
>> +				      (ev->flow_id & flow_id_mask);
>> +			ev->mbuf = m;
>> +			ev++;
>> +		}
>> +	} else {
>> +		num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
>> +					      buf, mbufs, num);
>> 	}
>>
>> -	if (dev_info->cb_fn) {
>> +	if (num && dev_info->cb_fn) {
>>
>> 		dropped = 0;
>> 		nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
>> -				       ETH_EVENT_BUFFER_SIZE, buf->count, ev,
>> -				       num, dev_info->cb_arg, &dropped);
>> +				       ETH_EVENT_BUFFER_SIZE, buf->count,
>> +				       &buf->events[buf->count], num,
>> +				       dev_info->cb_arg, &dropped);
>
>Before this patch, we pass ev which is &buf->events[buf->count] + num
>as the fifth param when calling cb_fn. Now, we are passing
>&buf->events[buf->count] for the non-vector case. Do you see this as an
>issue?

The callback function takes in the array of newly formed events, i.e. we
need to pass the start of the array and the count. The previous code had a
bug where it passed the end of the event list.

>Also, for the vector case would it make sense to pass
>&buf->events[buf->count] + num ?
>
>> 		if (unlikely(nb_cb > num))
>> 			RTE_EDEV_LOG_ERR("Rx CB returned %d (> %d) events",
>> 				nb_cb, num);
>> @@ -1124,6 +1226,30 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>> 	return nb_rx;
>> }
>>
>> +static void
>> +rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
>> +{
>> +	struct rte_event_eth_rx_adapter *rx_adapter = arg;
>> +	struct rte_eth_event_enqueue_buffer *buf =
>> +		&rx_adapter->event_enqueue_buffer;
>> +	struct rte_event *ev;
>> +
>> +	if (buf->count)
>> +		rxa_flush_event_buffer(rx_adapter);
>> +
>> +	if (vec->vector_ev->nb_elem == 0)
>> +		return;
>> +	ev = &buf->events[buf->count];
>> +
>> +	/* Event ready. */
>> +	ev->event = vec->event;
>> +	ev->vec = vec->vector_ev;
>> +	buf->count++;
>> +
>> +	vec->vector_ev = NULL;
>> +	vec->ts = 0;
>> +}
>> +
>> static int
>> rxa_service_func(void *args)
>> {
>> @@ -1137,6 +1263,24 @@ rxa_service_func(void *args)
>> 		return 0;
>> 	}
>>
>> +	if (rx_adapter->ena_vector) {
>> +		if ((rte_rdtsc() - rx_adapter->prev_expiry_ts) >=
>> +		    rx_adapter->vector_tmo_ticks) {
>> +			struct eth_rx_vector_data *vec;
>> +
>> +			TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
>> +				uint64_t elapsed_time = rte_rdtsc() - vec->ts;
>> +
>> +				if (elapsed_time >= vec->vector_timeout_ticks) {
>> +					rxa_vector_expire(vec, rx_adapter);
>> +					TAILQ_REMOVE(&rx_adapter->vector_list,
>> +						     vec, next);
>> +				}
>> +			}
>> +			rx_adapter->prev_expiry_ts = rte_rdtsc();
>> +		}
>> +	}
>> +
>> 	stats = &rx_adapter->stats;
>> 	stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
>> 	stats->rx_packets += rxa_poll(rx_adapter);
>> @@ -1640,6 +1784,28 @@ rxa_update_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> 	}
>> }
>>
>> +static void
>> +rxa_set_vector_data(struct eth_rx_queue_info *queue_info, uint16_t vector_count,
>> +		    uint64_t vector_ns, struct rte_mempool *mp, int32_t qid,
>> +		    uint16_t port_id)
>> +{
>> +#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
>> +	struct eth_rx_vector_data *vector_data;
>> +	uint32_t flow_id;
>> +
>> +	vector_data = &queue_info->vector_data;
>> +	vector_data->max_vector_count = vector_count;
>> +	vector_data->port = port_id;
>> +	vector_data->queue = qid;
>> +	vector_data->vector_pool = mp;
>> +	vector_data->vector_timeout_ticks =
>> +		NSEC2TICK(vector_ns, rte_get_timer_hz());
>> +	vector_data->ts = 0;
>> +	flow_id = queue_info->event & 0xFFFFF;
>> +	flow_id = flow_id == 0 ? (qid & 0xFF) | (port_id & 0xFFFF) : flow_id;
>
>Maybe I am missing something here. Looking at the code it looks like
>qid and port_id may overlap.
>For e.g., if qid = 0x10 and port_id = 0x11, flow_id would end up being
>0x11. Is this the expectation? Also, it may be useful to document the
>flow_id format.

The flow_id is 20 bits; I guess we could do a 12-bit queue_id and an
8-bit port as the flow.

>Comparing this format with the existing RSS hash based method, are we
>saying that all mbufs received in a rx burst are part of the same flow
>when vectorization is used?

Yes; the hard way to do this is to use a hash table, treating each mbuf
as having a unique flow.

>
>> +	vector_data->event = (queue_info->event & ~0xFFFFF) | flow_id;
>> +}
>> +
>> static void
>> rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
>> 	   struct eth_device_info *dev_info,
>> @@ -1741,6 +1907,44 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> 	}
>> }
>>
>> +static void
>> +rxa_sw_event_vector_configure(
>> +	struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
>> +	int rx_queue_id,
>> +	const struct rte_event_eth_rx_adapter_event_vector_config *config)
>> +{
>> +	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
>> +	struct eth_rx_queue_info *queue_info;
>> +	struct rte_event *qi_ev;
>> +
>> +	if (rx_queue_id == -1) {
>> +		uint16_t nb_rx_queues;
>> +		uint16_t i;
>> +
>> +		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
>> +		for (i = 0; i < nb_rx_queues; i++)
>> +			rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
>> +						      config);
>> +		return;
>> +	}
>> +
>> +	queue_info = &dev_info->rx_queue[rx_queue_id];
>> +	qi_ev = (struct rte_event *)&queue_info->event;
>> +	queue_info->ena_vector = 1;
>> +	qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
>> +	rxa_set_vector_data(queue_info, config->vector_sz,
>> +			    config->vector_timeout_ns, config->vector_mp,
>> +			    rx_queue_id, dev_info->dev->data->port_id);
>> +	rx_adapter->ena_vector = 1;
>> +	rx_adapter->vector_tmo_ticks =
>> +		rx_adapter->vector_tmo_ticks ?
>> +			      RTE_MIN(config->vector_timeout_ns << 1,
>> +				      rx_adapter->vector_tmo_ticks) :
>> +			      config->vector_timeout_ns << 1;
>> +	rx_adapter->prev_expiry_ts = 0;
>> +	TAILQ_INIT(&rx_adapter->vector_list);
>> +}
>> +
>> static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>> 		uint16_t eth_dev_id,
>> 		int rx_queue_id,
>> @@ -2081,6 +2285,15 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> 		return -EINVAL;
>> 	}
>>
>> +	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
>> +	    (queue_conf->rx_queue_flags &
>> +	     RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
>> +		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
>> +				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>> +				 eth_dev_id, id);
>> +		return -EINVAL;
>> +	}
>> +
>> 	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
>> 		(rx_queue_id != -1)) {
>> 		RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
>> @@ -2143,6 +2356,17 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> 	return 0;
>> }
>>
>> +static int
>> +rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
>> +{
>> +	limits->max_sz = MAX_VECTOR_SIZE;
>> +	limits->min_sz = MIN_VECTOR_SIZE;
>> +	limits->max_timeout_ns = MAX_VECTOR_NS;
>> +	limits->min_timeout_ns = MIN_VECTOR_NS;
>> +
>> +	return 0;
>> +}
>> +
>> int
>> rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
>> 				int32_t rx_queue_id)
>> @@ -2333,7 +2557,8 @@ rte_event_eth_rx_adapter_queue_event_vector_config(
>> 		ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
>> 			dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
>> 	} else {
>> -		ret = -ENOTSUP;
>> +		rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
>> +					      rx_queue_id, config);
>> 	}
>>
>> 	return ret;
>> @@ -2371,7 +2596,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
>> 		ret = dev->dev_ops->eth_rx_adapter_vector_limits_get(
>> 			dev, &rte_eth_devices[eth_port_id], limits);
>> 	} else {
>> -		ret = rxa_sw_vector_limits(limits);
>> 	}
>>
>> 	return ret;
>> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>> index f95edc075..254a31b1f 100644
>> --- a/lib/librte_eventdev/rte_eventdev.c
>> +++ b/lib/librte_eventdev/rte_eventdev.c
>> @@ -122,7 +122,11 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
>>
>> 	if (caps == NULL)
>> 		return -EINVAL;
>> -	*caps = 0;
>> +
>> +	if (dev->dev_ops->eth_rx_adapter_caps_get == NULL)
>> +		*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
>> +	else
>> +		*caps = 0;
>
>Any reason why we had to set a default caps value? I am thinking if a sw
>event device is used, it would set it anyway.

There are multiple sw event devices which don't implement the caps_get
function; this change solves that.

>>
>> 	return dev->dev_ops->eth_rx_adapter_caps_get ?
>> 			(*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
>> --
>> 2.17.1