From: "Xia, Chenbo"
To: "Ding, Xuan"; maxime.coquelin@redhat.com
Cc: dev@dpdk.org; "Hu, Jiayu"; "Jiang, Cheng1"; "Pai G, Sunil"; liangma@liangbit.com; "Ma, WenwuX"; "Wang, YuanX"
Subject: RE: [PATCH v6 5/5] examples/vhost: support async dequeue data path
Date: Fri, 13 May 2022 03:27:20 +0000
References: <20220407152546.38167-1-xuan.ding@intel.com> <20220513025058.12898-1-xuan.ding@intel.com> <20220513025058.12898-6-xuan.ding@intel.com>
In-Reply-To: <20220513025058.12898-6-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Ding, Xuan
> Sent: Friday, May 13, 2022 10:51 AM
> To: maxime.coquelin@redhat.com; Xia, Chenbo
> Cc: dev@dpdk.org; Hu, Jiayu; Jiang, Cheng1; Pai G, Sunil; liangma@liangbit.com; Ding, Xuan; Ma, WenwuX; Wang, YuanX
> Subject: [PATCH v6 5/5] examples/vhost: support async dequeue data path
>
> From: Xuan Ding
>
> This patch adds the use case for the async dequeue API. A vswitch can
> leverage a DMA device to accelerate the vhost async dequeue path.
>
> Signed-off-by: Wenwu Ma
> Signed-off-by: Yuan Wang
> Signed-off-by: Xuan Ding
> Tested-by: Yvonne Yang
> Reviewed-by: Maxime Coquelin
> ---
>  doc/guides/sample_app_ug/vhost.rst |   9 +-
>  examples/vhost/main.c              | 284 ++++++++++++++++++++---------
>  examples/vhost/main.h              |  32 +++-
>  examples/vhost/virtio_net.c        |  16 +-
>  4 files changed, 243 insertions(+), 98 deletions(-)
>
> diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
> index a6ce4bc8ac..09db965e70 100644
> --- a/doc/guides/sample_app_ug/vhost.rst
> +++ b/doc/guides/sample_app_ug/vhost.rst
> @@ -169,9 +169,12 @@ demonstrates how to use the async vhost APIs. It's used in combination with dmas
>  **--dmas**
>  This parameter is used to specify the assigned DMA device of a vhost device.
>  Async vhost-user net driver will be used if --dmas is set. For example
> ---dmas [txd0@00:04.0,txd1@00:04.1] means use DMA channel 00:04.0 for vhost
> -device 0 enqueue operation and use DMA channel 00:04.1 for vhost device 1
> -enqueue operation.
> +--dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] means use
> +DMA channel 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operation
> +and use DMA channel 00:04.1/00:04.3 for vhost device 1 enqueue/dequeue
> +operation. The index of the device corresponds to the socket file in order,
> +that means vhost device 0 is created through the first socket file, vhost
> +device 1 is created through the second socket file, and so on.
>
>  Common Issues
>  -------------
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index c4d46de1c5..d070391727 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -63,6 +63,9 @@
>
>  #define DMA_RING_SIZE 4096
>
> +#define ASYNC_ENQUEUE_VHOST 1
> +#define ASYNC_DEQUEUE_VHOST 2
> +
>  /* number of mbufs in all pools - if specified on command-line. */
>  static int total_num_mbufs = NUM_MBUFS_DEFAULT;
>
> @@ -116,6 +119,8 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES;
>  static char *socket_files;
>  static int nb_sockets;
>
> +static struct vhost_queue_ops vdev_queue_ops[RTE_MAX_VHOST_DEVICE];
> +
>  /* empty VMDq configuration structure. Filled in programmatically */
>  static struct rte_eth_conf vmdq_conf_default = {
> 	.rxmode = {
> @@ -205,6 +210,18 @@ struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * RTE_MAX_VHOST_DEVICE];
>  #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \
> 				/ US_PER_S * BURST_TX_DRAIN_US)
>
> +static int vid2socketid[RTE_MAX_VHOST_DEVICE];
> +
> +static uint32_t get_async_flag_by_socketid(int socketid)
> +{
> +	return dma_bind[socketid].async_flag;
> +}
> +
> +static void init_vid2socketid_array(int vid, int socketid)
> +{
> +	vid2socketid[vid] = socketid;
> +}

The return type and the function name should be on separate lines, as per the coding style. Also, the function above can be inline; the same suggestion applies to the other short functions below, especially the ones in the data path.

Thanks,
Chenbo

> +
>  static inline bool
>  is_dma_configured(int16_t dev_id)
>  {
> @@ -224,7 +241,7 @@ open_dma(const char *value)
> 	char *addrs = input;
> 	char *ptrs[2];
> 	char *start, *end, *substr;
> -	int64_t vid;
> +	int64_t socketid, vring_id;
>
> 	struct rte_dma_info info;
> 	struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> @@ -262,7 +279,9 @@ open_dma(const char *value)
>
> 	while (i < args_nr) {
> 		char *arg_temp = dma_arg[i];
> +		char *txd, *rxd;
> 		uint8_t sub_nr;
> +		int async_flag;
>
> 		sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> 		if (sub_nr != 2) {
> 			ret = -1;
> 			goto out;
> 		}
>
> -		start = strstr(ptrs[0], "txd");
> -		if (start == NULL) {
> +		txd = strstr(ptrs[0], "txd");
> +		rxd = strstr(ptrs[0], "rxd");
> +		if (txd) {
> +			start = txd;
> +			vring_id = VIRTIO_RXQ;
> +			async_flag = ASYNC_ENQUEUE_VHOST;
> +		} else if (rxd) {
> +			start = rxd;
> +			vring_id = VIRTIO_TXQ;
> +			async_flag = ASYNC_DEQUEUE_VHOST;
> +		} else {
> 			ret = -1;
> 			goto out;
> 		}
>
> 		start += 3;
> -		vid = strtol(start, &end, 0);
> +		socketid = strtol(start, &end, 0);
> 		if (end == start) {
> 			ret = -1;
> 			goto out;
> 		}
> @@ -338,7 +366,8 @@ open_dma(const char *value)
> 		dmas_id[dma_count++] = dev_id;
>
>  done:
> -		(dma_info + vid)->dmas[VIRTIO_RXQ].dev_id = dev_id;
> +		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
> +		(dma_info + socketid)->async_flag |= async_flag;
> 		i++;
> 	}
>  out:
> @@ -990,7 +1019,7 @@ complete_async_pkts(struct vhost_dev *vdev)
>  {
> 	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> 	uint16_t complete_count;
> -	int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> +	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].dev_id;
>
> 	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> 				VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
> @@ -1029,22 +1058,7 @@ drain_vhost(struct vhost_dev *vdev)
> 	uint16_t nr_xmit = vhost_txbuff[buff_idx]->len;
> 	struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table;
>
> -	if (builtin_net_driver) {
> -		ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> -	} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> -		uint16_t enqueue_fail = 0;
> -		int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> -
> -		complete_async_pkts(vdev);
> -		ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit, dma_id, 0);
> -
> -		enqueue_fail = nr_xmit - ret;
> -		if (enqueue_fail)
> -			free_pkts(&m[ret], nr_xmit - ret);
> -	} else {
> -		ret = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
> -				m, nr_xmit);
> -	}
> +	ret = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev, VIRTIO_RXQ, m, nr_xmit);
>
> 	if (enable_stats) {
> 		__atomic_add_fetch(&vdev->stats.rx_total_atomic, nr_xmit,
> @@ -1053,7 +1067,7 @@ drain_vhost(struct vhost_dev *vdev)
> 				__ATOMIC_SEQ_CST);
> 	}
>
> -	if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> +	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
> 		free_pkts(m, nr_xmit);
>  }
>
> @@ -1325,6 +1339,32 @@ drain_mbuf_table(struct mbuf_table *tx_q)
> 	}
>  }
>
> +uint16_t
> +async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t rx_count)
> +{
> +	uint16_t enqueue_count;
> +	uint16_t enqueue_fail = 0;
> +	uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_RXQ].dev_id;
> +
> +	complete_async_pkts(dev);
> +	enqueue_count = rte_vhost_submit_enqueue_burst(dev->vid, queue_id,
> +				pkts, rx_count, dma_id, 0);
> +
> +	enqueue_fail = rx_count - enqueue_count;
> +	if (enqueue_fail)
> +		free_pkts(&pkts[enqueue_count], enqueue_fail);
> +
> +	return enqueue_count;
> +}
> +
> +uint16_t
> +sync_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t rx_count)
> +{
> +	return rte_vhost_enqueue_burst(dev->vid, queue_id, pkts, rx_count);
> +}
> +
>  static __rte_always_inline void
>  drain_eth_rx(struct vhost_dev *vdev)
>  {
> @@ -1355,25 +1395,8 @@ drain_eth_rx(struct vhost_dev *vdev)
> 		}
> 	}
>
> -	if (builtin_net_driver) {
> -		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> -				pkts, rx_count);
> -	} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> -		uint16_t enqueue_fail = 0;
> -		int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> -
> -		complete_async_pkts(vdev);
> -		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> -				VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
> -
> -		enqueue_fail = rx_count - enqueue_count;
> -		if (enqueue_fail)
> -			free_pkts(&pkts[enqueue_count], enqueue_fail);
> -
> -	} else {
> -		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
> -				pkts, rx_count);
> -	}
> +	enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
> +			VIRTIO_RXQ, pkts, rx_count);
>
> 	if (enable_stats) {
> 		__atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count,
> @@ -1382,10 +1405,31 @@ drain_eth_rx(struct vhost_dev *vdev)
> 				__ATOMIC_SEQ_CST);
> 	}
>
> -	if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> +	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
> 		free_pkts(pkts, rx_count);
>  }
>
> +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count)
> +{
> +	int nr_inflight;
> +	uint16_t dequeue_count;
> +	uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_TXQ].dev_id;
> +
> +	dequeue_count = rte_vhost_async_try_dequeue_burst(dev->vid, queue_id,
> +			mbuf_pool, pkts, count, &nr_inflight, dma_id, 0);
> +
> +	return dequeue_count;
> +}
> +
> +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count)
> +{
> +	return rte_vhost_dequeue_burst(dev->vid, queue_id, mbuf_pool, pkts, count);
> +}
> +
>  static __rte_always_inline void
>  drain_virtio_tx(struct vhost_dev *vdev)
>  {
> @@ -1393,13 +1437,8 @@ drain_virtio_tx(struct vhost_dev *vdev)
> 	uint16_t count;
> 	uint16_t i;
>
> -	if (builtin_net_driver) {
> -		count = vs_dequeue_pkts(vdev, VIRTIO_TXQ, mbuf_pool,
> -				pkts, MAX_PKT_BURST);
> -	} else {
> -		count = rte_vhost_dequeue_burst(vdev->vid, VIRTIO_TXQ,
> -				mbuf_pool, pkts, MAX_PKT_BURST);
> -	}
> +	count = vdev_queue_ops[vdev->vid].dequeue_pkt_burst(vdev,
> +			VIRTIO_TXQ, mbuf_pool, pkts, MAX_PKT_BURST);
>
> 	/* setup VMDq for the first packet */
> 	if (unlikely(vdev->ready == DEVICE_MAC_LEARNING) && count) {
> @@ -1478,6 +1517,26 @@ switch_worker(void *arg __rte_unused)
> 	return 0;
>  }
>
> +static void
> +vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
> +{
> +	uint16_t n_pkt = 0;
> +	int pkts_inflight;
> +
> +	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
> +	pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid, queue_id);
> +
> +	struct rte_mbuf *m_cpl[pkts_inflight];
> +
> +	while (pkts_inflight) {
> +		n_pkt = rte_vhost_clear_queue_thread_unsafe(vdev->vid, queue_id, m_cpl,
> +				pkts_inflight, dma_id, 0);
> +		free_pkts(m_cpl, n_pkt);
> +		pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid,
> +				queue_id);
> +	}
> +}
> +
>  /*
>   * Remove a device from the specific data core linked list and from the
>   * main linked list. Synchronization occurs through the use of the
> @@ -1535,27 +1594,79 @@ destroy_device(int vid)
> 		vdev->vid);
>
> 	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> -		uint16_t n_pkt = 0;
> -		int pkts_inflight;
> -		int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> -		pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ);
> -		struct rte_mbuf *m_cpl[pkts_inflight];
> -
> -		while (pkts_inflight) {
> -			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> -					m_cpl, pkts_inflight, dma_id, 0);
> -			free_pkts(m_cpl, n_pkt);
> -			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
> -					VIRTIO_RXQ);
> -		}
> -
> +		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ);
> 		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> 		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> 	}
>
> +	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> +		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ);
> +		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> +		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> +	}
> +
> 	rte_free(vdev);
>  }
>
> +static int
> +get_socketid_by_vid(int vid)
> +{
> +	int i;
> +	char ifname[PATH_MAX];
> +	rte_vhost_get_ifname(vid, ifname, sizeof(ifname));
> +
> +	for (i = 0; i < nb_sockets; i++) {
> +		char *file = socket_files + i * PATH_MAX;
> +		if (strcmp(file, ifname) == 0)
> +			return i;
> +	}
> +
> +	return -1;
> +}
> +
> +static int
> +init_vhost_queue_ops(int vid)
> +{
> +	if (builtin_net_driver) {
> +		vdev_queue_ops[vid].enqueue_pkt_burst = builtin_enqueue_pkts;
> +		vdev_queue_ops[vid].dequeue_pkt_burst = builtin_dequeue_pkts;
> +	} else {
> +		if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled)
> +			vdev_queue_ops[vid].enqueue_pkt_burst = async_enqueue_pkts;
> +		else
> +			vdev_queue_ops[vid].enqueue_pkt_burst = sync_enqueue_pkts;
> +
> +		if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled)
> +			vdev_queue_ops[vid].dequeue_pkt_burst = async_dequeue_pkts;
> +		else
> +			vdev_queue_ops[vid].dequeue_pkt_burst = sync_dequeue_pkts;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +vhost_async_channel_register(int vid)
> +{
> +	int rx_ret = 0, tx_ret = 0;
> +
> +	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> +		rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> +		if (rx_ret == 0)
> +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
> +	}
> +
> +	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id != INVALID_DMA_ID) {
> +		tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ);
> +		if (tx_ret == 0)
> +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
> +	}
> +
> +	return rx_ret | tx_ret;
> +}
> +
>  /*
>   * A new device is added to a data core. First the device is added to the main linked list
>   * and then allocated to a specific data core.
> @@ -1567,6 +1678,8 @@ new_device(int vid)
> 	uint16_t i;
> 	uint32_t device_num_min = num_devices;
> 	struct vhost_dev *vdev;
> +	int ret;
> +
> 	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
> 	if (vdev == NULL) {
> 		RTE_LOG(INFO, VHOST_DATA,
> @@ -1589,6 +1702,17 @@ new_device(int vid)
> 		}
> 	}
>
> +	int socketid = get_socketid_by_vid(vid);
> +	if (socketid == -1)
> +		return -1;
> +
> +	init_vid2socketid_array(vid, socketid);
> +
> +	ret = vhost_async_channel_register(vid);
> +
> +	if (init_vhost_queue_ops(vid) != 0)
> +		return -1;
> +
> 	if (builtin_net_driver)
> 		vs_vhost_net_setup(vdev);
>
> @@ -1620,16 +1744,7 @@ new_device(int vid)
> 		"(%d) device has been added to data core %d\n",
> 		vid, vdev->coreid);
>
> -	if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> -		int ret;
> -
> -		ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> -		if (ret == 0)
> -			dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true;
> -		return ret;
> -	}
> -
> -	return 0;
> +	return ret;
>  }
>
>  static int
> @@ -1647,22 +1762,9 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> 	if (queue_id != VIRTIO_RXQ)
> 		return 0;
>
> -	if (dma_bind[vid].dmas[queue_id].async_enabled) {
> -		if (!enable) {
> -			uint16_t n_pkt = 0;
> -			int pkts_inflight;
> -			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id);
> -			int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> -			struct rte_mbuf *m_cpl[pkts_inflight];
> -
> -			while (pkts_inflight) {
> -				n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
> -						m_cpl, pkts_inflight, dma_id, 0);
> -				free_pkts(m_cpl, n_pkt);
> -				pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
> -						queue_id);
> -			}
> -		}
> +	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
> +		if (!enable)
> +			vhost_clear_queue_thread_unsafe(vdev, queue_id);
> 	}
>
> 	return 0;
> @@ -1887,7 +1989,7 @@ main(int argc, char *argv[])
> 	for (i = 0; i < nb_sockets; i++) {
> 		char *file = socket_files + i * PATH_MAX;
>
> -		if (dma_count)
> +		if (dma_count && get_async_flag_by_socketid(i) != 0)
> 			flags = flags | RTE_VHOST_USER_ASYNC_COPY;
>
> 		ret = rte_vhost_driver_register(file, flags);
> diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> index e7f395c3c9..2fcb8376c5 100644
> --- a/examples/vhost/main.h
> +++ b/examples/vhost/main.h
> @@ -61,6 +61,19 @@ struct vhost_dev {
> 	struct vhost_queue queues[MAX_QUEUE_PAIRS * 2];
>  } __rte_cache_aligned;
>
> +typedef uint16_t (*vhost_enqueue_burst_t)(struct vhost_dev *dev,
> +		uint16_t queue_id, struct rte_mbuf **pkts,
> +		uint32_t count);
> +
> +typedef uint16_t (*vhost_dequeue_burst_t)(struct vhost_dev *dev,
> +		uint16_t queue_id, struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count);
> +
> +struct vhost_queue_ops {
> +	vhost_enqueue_burst_t enqueue_pkt_burst;
> +	vhost_dequeue_burst_t dequeue_pkt_burst;
> +};
> +
>  TAILQ_HEAD(vhost_dev_tailq_list, vhost_dev);
>
> @@ -87,6 +100,7 @@ struct dma_info {
>
>  struct dma_for_vhost {
> 	struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> +	uint32_t async_flag;
>  };
>
>  /* we implement non-extra virtio net features */
> @@ -97,7 +111,19 @@ void vs_vhost_net_remove(struct vhost_dev *dev);
>  uint16_t vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> 		struct rte_mbuf **pkts, uint32_t count);
>
> -uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> -		struct rte_mempool *mbuf_pool,
> -		struct rte_mbuf **pkts, uint16_t count);
> +uint16_t builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t count);
> +uint16_t builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count);
> +uint16_t sync_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t count);
> +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count);
> +uint16_t async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t count);
> +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool,
> +		struct rte_mbuf **pkts, uint16_t count);
>  #endif /* _MAIN_H_ */
> diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
> index 9064fc3a82..2432a96566 100644
> --- a/examples/vhost/virtio_net.c
> +++ b/examples/vhost/virtio_net.c
> @@ -238,6 +238,13 @@ vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> 	return count;
>  }
>
> +uint16_t
> +builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mbuf **pkts, uint32_t count)
> +{
> +	return vs_enqueue_pkts(dev, queue_id, pkts, count);
> +}
> +
>  static __rte_always_inline int
>  dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
> 		struct rte_mbuf *m, uint16_t desc_idx,
> @@ -363,7 +370,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
> 	return 0;
>  }
>
> -uint16_t
> +static uint16_t
>  vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> 		struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
>  {
> @@ -440,3 +447,10 @@ vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
>
> 	return i;
>  }
> +
> +uint16_t
> +builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> +		struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> +{
> +	return vs_dequeue_pkts(dev, queue_id, pkts, count);
> +}
> --
> 2.17.1