From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh Bhagavatula
To: "Jayatheerthan, Jay", Jerin Jacob Kollanukkaran, "Carrillo, Erik G",
 "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com",
 "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J"
Cc: "dev@dpdk.org"
Date: Fri, 26 Mar 2021 09:00:17 +0000
Subject: Re: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event
 vector support
References: <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210324050525.4489-1-pbhagavatula@marvell.com>
 <20210324050525.4489-5-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

>> From: Pavan Nikhilesh Bhagavatula
>> Sent: Thursday, March 25, 2021 6:44 PM
>> To: Jayatheerthan, Jay; Jerin Jacob Kollanukkaran; Carrillo, Erik G;
>> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
>> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J
>> Cc: dev@dpdk.org
>> Subject: RE: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter
>> event vector support
>>
>>
>> >-----Original Message-----
>> >From: Jayatheerthan, Jay
>> >Sent: Thursday, March 25, 2021 4:07 PM
>> >To: Pavan Nikhilesh Bhagavatula; Jerin Jacob Kollanukkaran;
>> >Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy;
>> >hemant.agrawal@nxp.com; Van Haaren, Harry; mattias.ronnblom;
>> >Ma, Liang J
>> >Cc: dev@dpdk.org
>> >Subject: [EXT] RE: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter
>> >event vector support
>> >
>> >External Email
>> >
>> >----------------------------------------------------------------------
>> >> -----Original Message-----
>> >> From: pbhagavatula@marvell.com
>> >> Sent: Wednesday, March 24, 2021 10:35 AM
>> >> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G;
>> >> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
>> >> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J
>> >> Cc: dev@dpdk.org; Pavan Nikhilesh
>> >> Subject: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event
>> >> vector support
>> >>
>> >> From: Pavan Nikhilesh
>> >>
>> >> Add event vector support for the event eth Rx adapter. The
>> >> implementation creates vector flows based on the port and queue
>> >> identifier of the received mbufs.
>> >>
>> >> Signed-off-by: Pavan Nikhilesh
>> >> ---
>> >>  lib/librte_eventdev/eventdev_pmd.h |   7 +-
>> >>  .../rte_event_eth_rx_adapter.c     | 257 ++++++++++++++++--
>> >>  lib/librte_eventdev/rte_eventdev.c |   6 +-
>> >>  3 files changed, 250 insertions(+), 20 deletions(-)
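
(Context for anyone reviewing: a minimal, illustrative sketch of a worker
consuming these vector events. dev_id, port_id and process_pkt() are
placeholders, and returning the vector event to its mempool is shown as
one possible approach; error handling is omitted.)

/* Illustrative worker loop handling vectorized Rx events. */
static void
worker(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event ev;
        uint16_t i;

        for (;;) {
                if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))
                        continue;

                if (ev.event_type == RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR) {
                        struct rte_event_vector *vec = ev.vec;

                        /* attr_valid: all mbufs share vec->port/vec->queue. */
                        for (i = 0; i < vec->nb_elem; i++)
                                process_pkt(vec->mbufs[i]);
                        /* Hand the vector event back to its pool. */
                        rte_mempool_put(rte_mempool_from_obj(vec), vec);
                } else {
                        process_pkt(ev.mbuf);
                }
        }
}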
>> >>
>> >> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
>> >> index 9297f1433..0f724ac85 100644
>> >> --- a/lib/librte_eventdev/eventdev_pmd.h
>> >> +++ b/lib/librte_eventdev/eventdev_pmd.h
>> >> @@ -69,9 +69,10 @@ extern "C" {
>> >>         } \
>> >>  } while (0)
>> >>
>> >> -#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
>> >> -        ((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
>> >> -        (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
>> >> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
>> >> +        ((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
>> >> +        (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) | \
>> >> +        (RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
>> >>
>> >>  #define RTE_EVENT_CRYPTO_ADAPTER_SW_CAP \
>> >>         RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA
>> >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> index ac8ba5bf0..c71990078 100644
>> >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> @@ -26,6 +26,10 @@
>> >>  #define BATCH_SIZE 32
>> >>  #define BLOCK_CNT_THRESHOLD 10
>> >>  #define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE)
>> >> +#define MAX_VECTOR_SIZE 1024
>> >> +#define MIN_VECTOR_SIZE 4
>> >> +#define MAX_VECTOR_NS 1E9
>> >> +#define MIN_VECTOR_NS 1E5
>> >>
>> >>  #define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
>> >>  #define ETH_RX_ADAPTER_MEM_NAME_LEN 32
>> >> @@ -59,6 +63,20 @@ struct eth_rx_poll_entry {
>> >>         uint16_t eth_rx_qid;
>> >>  };
>> >>
>> >> +struct eth_rx_vector_data {
>> >> +        TAILQ_ENTRY(eth_rx_vector_data) next;
>> >> +        uint16_t port;
>> >> +        uint16_t queue;
>> >> +        uint16_t max_vector_count;
>> >> +        uint64_t event;
>> >> +        uint64_t ts;
>> >> +        uint64_t vector_timeout_ticks;
>> >> +        struct rte_mempool *vector_pool;
>> >> +        struct rte_event_vector *vector_ev;
>> >> +} __rte_cache_aligned;
>> >> +
>> >> +TAILQ_HEAD(eth_rx_vector_data_list, eth_rx_vector_data);
>> >> +
>> >>  /* Instance per adapter */
>> >>  struct rte_eth_event_enqueue_buffer {
>> >>         /* Count of events in this buffer */
>> >> @@ -92,6 +110,14 @@ struct rte_event_eth_rx_adapter {
>> >>         uint32_t wrr_pos;
>> >>         /* Event burst buffer */
>> >>         struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
>> >> +        /* Vector enable flag */
>> >> +        uint8_t ena_vector;
>> >> +        /* Timestamp of previous vector expiry list traversal */
>> >> +        uint64_t prev_expiry_ts;
>> >> +        /* Minimum ticks to wait before traversing expiry list */
>> >> +        uint64_t vector_tmo_ticks;
>> >> +        /* vector list */
>> >> +        struct eth_rx_vector_data_list vector_list;
>> >>         /* Per adapter stats */
>> >>         struct rte_event_eth_rx_adapter_stats stats;
>> >>         /* Block count, counts up to BLOCK_CNT_THRESHOLD */
>> >> @@ -198,9 +224,11 @@ struct eth_device_info {
>> >>  struct eth_rx_queue_info {
>> >>         int queue_enabled;      /* True if added */
>> >>         int intr_enabled;
>> >> +        uint8_t ena_vector;
>> >>         uint16_t wt;            /* Polling weight */
>> >>         uint32_t flow_id_mask;  /* Set to ~0 if app provides flow id else 0 */
>> >>         uint64_t event;
>> >> +        struct eth_rx_vector_data vector_data;
>> >>  };
>> >>
>> >>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
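
(Aside for anyone trying this out: the vector_mp passed through the event
vector config can come from the vector pool helper added earlier in this
series; a minimal sketch, counts and sizes are arbitrary.)

#include <rte_eventdev.h>
#include <rte_lcore.h>

/* Sketch: create the mempool backing vector_mp. The per-vector element
 * count must fit within the vector size limits reported by the adapter. */
static struct rte_mempool *
create_rx_vector_pool(void)
{
        return rte_event_vector_pool_create("rx_vector_pool",
                                            16384, /* pool size */
                                            0,     /* cache size */
                                            1024,  /* mbufs per vector */
                                            rte_socket_id());
}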
>> >> @@ -722,6 +750,9 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>> >>                 &rx_adapter->event_enqueue_buffer;
>> >>         struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>> >>
>> >> +        if (!buf->count)
>> >> +                return 0;
>> >> +
>> >>         uint16_t n = rte_event_enqueue_new_burst(rx_adapter->eventdev_id,
>> >>                                         rx_adapter->event_port_id,
>> >>                                         buf->events,
>> >> @@ -742,6 +773,72 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>> >>         return n;
>> >>  }
>> >>
>> >> +static inline uint16_t
>> >> +rxa_create_event_vector(struct rte_event_eth_rx_adapter *rx_adapter,
>> >> +                        struct eth_rx_queue_info *queue_info,
>> >> +                        struct rte_eth_event_enqueue_buffer *buf,
>> >> +                        struct rte_mbuf **mbufs, uint16_t num)
>> >> +{
>> >> +        struct rte_event *ev = &buf->events[buf->count];
>> >> +        struct eth_rx_vector_data *vec;
>> >> +        uint16_t filled, space, sz;
>> >> +
>> >> +        filled = 0;
>> >> +        vec = &queue_info->vector_data;
>> >> +        while (num) {
>> >> +                if (vec->vector_ev == NULL) {
>> >> +                        if (rte_mempool_get(vec->vector_pool,
>> >> +                                            (void **)&vec->vector_ev) < 0) {
>> >> +                                rte_pktmbuf_free_bulk(mbufs, num);
>> >> +                                return 0;
>> >> +                        }
>> >> +                        vec->vector_ev->nb_elem = 0;
>> >> +                        vec->vector_ev->port = vec->port;
>> >> +                        vec->vector_ev->queue = vec->queue;
>> >> +                        vec->vector_ev->attr_valid = true;
>> >> +                        TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
>> >> +                } else if (vec->vector_ev->nb_elem == vec->max_vector_count) {
>> >
>> >Is there a case where nb_elem > max_vector_count, as we accumulate
>> >sz into it ?
>>
>> I don't think so; that would overflow the vector event.
>>
>> >
>> >> +                        /* Event ready. */
>> >> +                        ev->event = vec->event;
>> >> +                        ev->vec = vec->vector_ev;
>> >> +                        ev++;
>> >> +                        filled++;
>> >> +                        vec->vector_ev = NULL;
>> >> +                        TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
>> >> +                        if (rte_mempool_get(vec->vector_pool,
>> >> +                                            (void **)&vec->vector_ev) < 0) {
>> >> +                                rte_pktmbuf_free_bulk(mbufs, num);
>> >> +                                return 0;
>> >> +                        }
>> >> +                        vec->vector_ev->nb_elem = 0;
>> >> +                        vec->vector_ev->port = vec->port;
>> >> +                        vec->vector_ev->queue = vec->queue;
>> >> +                        vec->vector_ev->attr_valid = true;
>> >> +                        TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
>> >> +                }
>> >> +
>> >> +                space = vec->max_vector_count - vec->vector_ev->nb_elem;
>> >> +                sz = num > space ? space : num;
>> >> +                memcpy(vec->vector_ev->mbufs + vec->vector_ev->nb_elem,
>> >> +                       mbufs, sizeof(void *) * sz);
>> >> +                vec->vector_ev->nb_elem += sz;
>> >> +                num -= sz;
>> >> +                mbufs += sz;
>> >> +                vec->ts = rte_rdtsc();
>> >> +        }
>> >> +
>> >> +        if (vec->vector_ev->nb_elem == vec->max_vector_count) {
>> >
>> >Same here.
>> >
>> >> +                ev->event = vec->event;
>> >> +                ev->vec = vec->vector_ev;
>> >> +                ev++;
>> >> +                filled++;
>> >> +                vec->vector_ev = NULL;
>> >> +                TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
>> >> +        }
>> >> +
>> >> +        return filled;
>> >> +}
>> >
>> >I am seeing more than one repeating code chunk in this function.
>> >Perhaps you can give it a try to not repeat. We can drop it if it's
>> >performance affecting.
>>
>> I will try to move them to inline functions and test.
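
Roughly along these lines (untested sketch; the helper name is just a
placeholder):

/* Hoist the duplicated "allocate and arm a new vector event" chunk into
 * a single inline helper; returns 0 on success. */
static __rte_always_inline int
rxa_init_vector_ev(struct rte_event_eth_rx_adapter *rx_adapter,
                   struct eth_rx_vector_data *vec)
{
        if (rte_mempool_get(vec->vector_pool, (void **)&vec->vector_ev) < 0)
                return -ENOMEM;
        vec->vector_ev->nb_elem = 0;
        vec->vector_ev->port = vec->port;
        vec->vector_ev->queue = vec->queue;
        vec->vector_ev->attr_valid = true;
        TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
        return 0;
}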
>> >
>> >> +
>> >>  static inline void
>> >>  rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>                 uint16_t eth_dev_id,
>> >> @@ -770,25 +867,30 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>         rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
>> >>         do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
>> >
>> >The RSS-related code is executed for the vector case as well. Can this
>> >be moved inside the ena_vector if condition ?
>>
>> RSS is used to generate the event flow id; in the vector case the flow
>> id will be a combination of port and queue id.
>> The idea is that flows having the same RSS LSB will end up in the same
>> queue.
>>
>
>I meant to say, rss_mask and do_rss are used only when ena_vector is
>false. Could they be moved inside the appropriate condition ?
>

Ah! I see. I will move them inside the conditional; see the sketch after
this hunk.

>> >
>> >>
>> >> -        for (i = 0; i < num; i++) {
>> >> -                m = mbufs[i];
>> >> -
>> >> -                rss = do_rss ?
>> >> -                        rxa_do_softrss(m, rx_adapter->rss_key_be) :
>> >> -                        m->hash.rss;
>> >> -                ev->event = event;
>> >> -                ev->flow_id = (rss & ~flow_id_mask) |
>> >> -                                (ev->flow_id & flow_id_mask);
>> >> -                ev->mbuf = m;
>> >> -                ev++;
>> >> +        if (!eth_rx_queue_info->ena_vector) {
>> >> +                for (i = 0; i < num; i++) {
>> >> +                        m = mbufs[i];
>> >> +
>> >> +                        rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
>> >> +                                     : m->hash.rss;
>> >> +                        ev->event = event;
>> >> +                        ev->flow_id = (rss & ~flow_id_mask) |
>> >> +                                      (ev->flow_id & flow_id_mask);
>> >> +                        ev->mbuf = m;
>> >> +                        ev++;
>> >> +                }
>> >> +        } else {
>> >> +                num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
>> >> +                                              buf, mbufs, num);
>> >>         }
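
Untested sketch of that move (m = mbufs[0] is pulled in from the
surrounding context; the vector path derives its flow id from port/queue,
so the RSS bits are only needed on the non-vector path):

        if (!eth_rx_queue_info->ena_vector) {
                m = mbufs[0];
                rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
                do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
                for (i = 0; i < num; i++) {
                        m = mbufs[i];

                        rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
                                     : m->hash.rss;
                        ev->event = event;
                        ev->flow_id = (rss & ~flow_id_mask) |
                                      (ev->flow_id & flow_id_mask);
                        ev->mbuf = m;
                        ev++;
                }
        } else {
                num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
                                              buf, mbufs, num);
        }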
>> >>
>> >> -        if (dev_info->cb_fn) {
>> >> +        if (num && dev_info->cb_fn) {
>> >>
>> >>                 dropped = 0;
>> >>                 nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
>> >> -                                        ETH_EVENT_BUFFER_SIZE, buf->count, ev,
>> >> -                                        num, dev_info->cb_arg, &dropped);
>> >> +                                        ETH_EVENT_BUFFER_SIZE, buf->count,
>> >> +                                        &buf->events[buf->count], num,
>> >> +                                        dev_info->cb_arg, &dropped);
>> >
>> >Before this patch, we passed ev, which is &buf->events[buf->count] + num,
>> >as the fifth param when calling cb_fn. Now, we are passing
>> >&buf->events[buf->count] for the non-vector case. Do you see this as an
>> >issue?
>> >
>>
>> The callback function takes in the array of newly formed events, i.e. we
>> need to pass the start of the array and the count.
>>
>> The previous code had a bug where it passed the end of the event list.
>
>ok, that makes sense.
>
>>
>> >Also, for the vector case would it make sense to pass
>> >&buf->events[buf->count] + num ?
>> >
>> >>                 if (unlikely(nb_cb > num))
>> >>                         RTE_EDEV_LOG_ERR("Rx CB returned %d (> %d) events",
>> >>                                          nb_cb, num);
>> >> @@ -1124,6 +1226,30 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>> >>         return nb_rx;
>> >>  }
>> >>
>> >> +static void
>> >> +rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
>> >> +{
>> >> +        struct rte_event_eth_rx_adapter *rx_adapter = arg;
>> >> +        struct rte_eth_event_enqueue_buffer *buf =
>> >> +                &rx_adapter->event_enqueue_buffer;
>> >> +        struct rte_event *ev;
>> >> +
>> >> +        if (buf->count)
>> >> +                rxa_flush_event_buffer(rx_adapter);
>> >> +
>> >> +        if (vec->vector_ev->nb_elem == 0)
>> >> +                return;
>> >> +        ev = &buf->events[buf->count];
>> >> +
>> >> +        /* Event ready. */
>> >> +        ev->event = vec->event;
>> >> +        ev->vec = vec->vector_ev;
>> >> +        buf->count++;
>> >> +
>> >> +        vec->vector_ev = NULL;
>> >> +        vec->ts = 0;
>> >> +}
>> >> +
>> >>  static int
>> >>  rxa_service_func(void *args)
>> >>  {
>> >> @@ -1137,6 +1263,24 @@ rxa_service_func(void *args)
>> >>                 return 0;
>> >>         }
>> >>
>> >> +        if (rx_adapter->ena_vector) {
>> >> +                if ((rte_rdtsc() - rx_adapter->prev_expiry_ts) >=
>> >> +                    rx_adapter->vector_tmo_ticks) {
>> >> +                        struct eth_rx_vector_data *vec;
>> >> +
>> >> +                        TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
>> >> +                                uint64_t elapsed_time = rte_rdtsc() - vec->ts;
>> >> +
>> >> +                                if (elapsed_time >= vec->vector_timeout_ticks) {
>> >> +                                        rxa_vector_expire(vec, rx_adapter);
>> >> +                                        TAILQ_REMOVE(&rx_adapter->vector_list,
>> >> +                                                     vec, next);
>> >> +                                }
>> >> +                        }
>> >> +                        rx_adapter->prev_expiry_ts = rte_rdtsc();
>> >> +                }
>> >> +        }
>> >> +
>> >>         stats = &rx_adapter->stats;
>> >>         stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
>> >>         stats->rx_packets += rxa_poll(rx_adapter);
>> >> @@ -1640,6 +1784,28 @@ rxa_update_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>         }
>> >>  }
>> >>
>> >> +static void
>> >> +rxa_set_vector_data(struct eth_rx_queue_info *queue_info, uint16_t vector_count,
>> >> +                    uint64_t vector_ns, struct rte_mempool *mp, int32_t qid,
>> >> +                    uint16_t port_id)
>> >> +{
>> >> +#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
>> >> +        struct eth_rx_vector_data *vector_data;
>> >> +        uint32_t flow_id;
>> >> +
>> >> +        vector_data = &queue_info->vector_data;
>> >> +        vector_data->max_vector_count = vector_count;
>> >> +        vector_data->port = port_id;
>> >> +        vector_data->queue = qid;
>> >> +        vector_data->vector_pool = mp;
>> >> +        vector_data->vector_timeout_ticks =
>> >> +                NSEC2TICK(vector_ns, rte_get_timer_hz());
>> >> +        vector_data->ts = 0;
>> >> +        flow_id = queue_info->event & 0xFFFFF;
>> >> +        flow_id = flow_id == 0 ? (qid & 0xFF) | (port_id & 0xFFFF) : flow_id;
>> >
>> >Maybe I am missing something here. Looking at the code it looks like
>> >qid and port_id may overlap. For e.g., if qid = 0x10 and port_id = 0x11,
>> >flow_id would end up being 0x11. Is this the expectation? Also, it may
>> >be useful to document the flow_id format.
>>
>> The flow_id is 20 bit; I guess we could do a 12-bit queue_id and an
>> 8-bit port as a flow.
>
>This sounds reasonable to me. It would be useful to have the flow_id
>format and how it is used for vectorization in the Rx/Tx adapter
>documentation.

This is only applicable to the SW Rx adapter implementation; a HW
implementation might have its own way of implementing the flow
aggregation. There is no documentation specific to the SW Rx adapter,
so I will add it to the commit log.

>
>>
>> >Comparing this format with the existing RSS hash based method, are we
>> >saying that all mbufs received in an rx burst are part of the same flow
>> >when vectorization is used?
>>
>> Yes. The hard way to do this is to use a hash table, treating each
>> mbuf as having a unique flow.
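
To make the proposed packing concrete (sketch only; the bit widths are the
ones floated above, not final, and this is specific to the SW adapter):

/* 20-bit flow id for vector events: 12-bit Rx queue id in the upper
 * bits, 8-bit port id in the lower bits. */
#define RXA_VECTOR_FLOW_ID(port_id, qid) \
        ((((uint32_t)(qid) & 0xFFF) << 8) | ((uint32_t)(port_id) & 0xFF))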
>> >
>> >> +        vector_data->event = (queue_info->event & ~0xFFFFF) | flow_id;
>> >> +}
>> >> +
>> >>  static void
>> >>  rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>            struct eth_device_info *dev_info,
>> >> @@ -1741,6 +1907,44 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>         }
>> >>  }
>> >>
>> >> +static void
>> >> +rxa_sw_event_vector_configure(
>> >> +        struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
>> >> +        int rx_queue_id,
>> >> +        const struct rte_event_eth_rx_adapter_event_vector_config *config)
>> >> +{
>> >> +        struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
>> >> +        struct eth_rx_queue_info *queue_info;
>> >> +        struct rte_event *qi_ev;
>> >> +
>> >> +        if (rx_queue_id == -1) {
>> >> +                uint16_t nb_rx_queues;
>> >> +                uint16_t i;
>> >> +
>> >> +                nb_rx_queues = dev_info->dev->data->nb_rx_queues;
>> >> +                for (i = 0; i < nb_rx_queues; i++)
>> >> +                        rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
>> >> +                                                      config);
>> >> +                return;
>> >> +        }
>> >> +
>> >> +        queue_info = &dev_info->rx_queue[rx_queue_id];
>> >> +        qi_ev = (struct rte_event *)&queue_info->event;
>> >> +        queue_info->ena_vector = 1;
>> >> +        qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
>> >> +        rxa_set_vector_data(queue_info, config->vector_sz,
>> >> +                            config->vector_timeout_ns, config->vector_mp,
>> >> +                            rx_queue_id, dev_info->dev->data->port_id);
>> >> +        rx_adapter->ena_vector = 1;
>> >> +        rx_adapter->vector_tmo_ticks =
>> >> +                rx_adapter->vector_tmo_ticks ?
>> >> +                        RTE_MIN(config->vector_timeout_ns << 1,
>> >> +                                rx_adapter->vector_tmo_ticks) :
>> >> +                        config->vector_timeout_ns << 1;
>> >> +        rx_adapter->prev_expiry_ts = 0;
>> >> +        TAILQ_INIT(&rx_adapter->vector_list);
>> >> +}
>> >> +
>> >>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>                 uint16_t eth_dev_id,
>> >>                 int rx_queue_id,
>> >> @@ -2081,6 +2285,15 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> >>                 return -EINVAL;
>> >>         }
>> >>
>> >> +        if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
>> >> +            (queue_conf->rx_queue_flags &
>> >> +             RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
>> >> +                RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
>> >> +                                 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>> >> +                                 eth_dev_id, id);
>> >> +                return -EINVAL;
>> >> +        }
>> >> +
>> >>         if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
>> >>             (rx_queue_id != -1)) {
>> >>                 RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
>> >> @@ -2143,6 +2356,17 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> >>         return 0;
>> >>  }
>> >>
>> >> +static int
>> >> +rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
>> >> +{
>> >> +        limits->max_sz = MAX_VECTOR_SIZE;
>> >> +        limits->min_sz = MIN_VECTOR_SIZE;
>> >> +        limits->max_timeout_ns = MAX_VECTOR_NS;
>> >> +        limits->min_timeout_ns = MIN_VECTOR_NS;
>> >> +
>> >> +        return 0;
>> >> +}
>> >> +
>> >>  int
>> >>  rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
>> >>                                    int32_t rx_queue_id)
>> >> @@ -2333,7 +2557,8 @@ rte_event_eth_rx_adapter_queue_event_vector_config(
>> >>                 ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
>> >>                         dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
>> >>         } else {
>> >> -                ret = -ENOTSUP;
>> >> +                rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
>> >> +                                              rx_queue_id, config);
>> >>         }
>> >>
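
(For completeness, a rough application-side sketch of staying within the
advertised limits when configuring a queue; the requested size/timeout
values are illustrative, and error handling is trimmed. id is the adapter
id, dev_id the event device id, vector_pool as in the earlier pool sketch.)

        struct rte_event_eth_rx_adapter_vector_limits limits;
        struct rte_event_eth_rx_adapter_event_vector_config cfg;

        rte_event_eth_rx_adapter_vector_limits_get(dev_id, eth_dev_id,
                                                   &limits);

        cfg.vector_sz = RTE_MAX(limits.min_sz, RTE_MIN(limits.max_sz, 256));
        cfg.vector_timeout_ns =
                RTE_MAX(limits.min_timeout_ns,
                        RTE_MIN(limits.max_timeout_ns, 1000000));
        cfg.vector_mp = vector_pool;

        rte_event_eth_rx_adapter_queue_event_vector_config(
                id, eth_dev_id, -1 /* all queues */, &cfg);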
>> >>         return ret;
>> >> @@ -2371,7 +2596,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
>> >>                 ret = dev->dev_ops->eth_rx_adapter_vector_limits_get(
>> >>                         dev, &rte_eth_devices[eth_port_id], limits);
>> >>         } else {
>> >> -                ret = -ENOTSUP;
>> >> +                ret = rxa_sw_vector_limits(limits);
>> >>         }
>> >>
>> >>         return ret;
>> >> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>> >> index f95edc075..254a31b1f 100644
>> >> --- a/lib/librte_eventdev/rte_eventdev.c
>> >> +++ b/lib/librte_eventdev/rte_eventdev.c
>> >> @@ -122,7 +122,11 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
>> >>
>> >>         if (caps == NULL)
>> >>                 return -EINVAL;
>> >> -        *caps = 0;
>> >> +
>> >> +        if (dev->dev_ops->eth_rx_adapter_caps_get == NULL)
>> >> +                *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
>> >> +        else
>> >> +                *caps = 0;
>> >
>> >Any reason why we had to set a default caps value? I am thinking if a sw
>> >event device is used, it would set it anyway.
>> >
>>
>> There are multiple SW event devices which don't implement the caps_get
>> function; this change solves that.
>>
>> >
>> >>         return dev->dev_ops->eth_rx_adapter_caps_get ?
>> >>                 (*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
>> >> --
>> >> 2.17.1