From: "Ding, Xuan"
To: "Xia, Chenbo", maxime.coquelin@redhat.com
Cc: dev@dpdk.org, "Hu, Jiayu", "Jiang, Cheng1", "Pai G, Sunil", liangma@liangbit.com, "Ma, WenwuX", "Wang, YuanX"
Subject: RE: [PATCH v6 5/5] examples/vhost: support async dequeue data path
Date: Fri, 13 May 2022 03:51:47 +0000
References: <20220407152546.38167-1-xuan.ding@intel.com>
 <20220513025058.12898-1-xuan.ding@intel.com>
 <20220513025058.12898-6-xuan.ding@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Xia, Chenbo
> Sent: Friday, May 13, 2022 11:27 AM
> To: Ding, Xuan; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Hu, Jiayu; Jiang, Cheng1; Pai G, Sunil;
> liangma@liangbit.com; Ma, WenwuX; Wang, YuanX
> Subject: RE: [PATCH v6 5/5] examples/vhost: support async dequeue data
> path
>
> > -----Original Message-----
> > From: Ding, Xuan
> > Sent: Friday, May 13, 2022 10:51 AM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo
> > Cc: dev@dpdk.org; Hu, Jiayu; Jiang, Cheng1; Pai G, Sunil;
> > liangma@liangbit.com; Ding, Xuan; Ma, WenwuX; Wang, YuanX
> > Subject: [PATCH v6 5/5] examples/vhost: support async dequeue data
> > path
> >
> > From: Xuan Ding
> >
> > This patch adds the use case for the async dequeue API. The vswitch
> > can leverage a DMA device to accelerate the vhost async dequeue path.
> >
> > Signed-off-by: Wenwu Ma
> > Signed-off-by: Yuan Wang
> > Signed-off-by: Xuan Ding
> > Tested-by: Yvonne Yang
> > Reviewed-by: Maxime Coquelin
> > ---
> >  doc/guides/sample_app_ug/vhost.rst |   9 +-
> >  examples/vhost/main.c              | 284 ++++++++++++++++++++---------
> >  examples/vhost/main.h              |  32 +++-
> >  examples/vhost/virtio_net.c        |  16 +-
> >  4 files changed, 243 insertions(+), 98 deletions(-)
> >
> > diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
> > index a6ce4bc8ac..09db965e70 100644
> > --- a/doc/guides/sample_app_ug/vhost.rst
> > +++ b/doc/guides/sample_app_ug/vhost.rst
> > @@ -169,9 +169,12 @@ demonstrates how to use the async vhost APIs. It's used in combination with dmas
> >  **--dmas**
> >  This parameter is used to specify the assigned DMA device of a vhost device.
> >  Async vhost-user net driver will be used if --dmas is set. For example
> > ---dmas [txd0@00:04.0,txd1@00:04.1] means use DMA channel 00:04.0 for vhost
> > -device 0 enqueue operation and use DMA channel 00:04.1 for vhost device 1
> > -enqueue operation.
> > +--dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] means use
> > +DMA channel 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operation
> > +and use DMA channel 00:04.1/00:04.3 for vhost device 1 enqueue/dequeue
> > +operation. The index of the device corresponds to the socket file in order,
> > +that means vhost device 0 is created through the first socket file, vhost
> > +device 1 is created through the second socket file, and so on.
> >
> >  Common Issues
> >  -------------
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index c4d46de1c5..d070391727 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -63,6 +63,9 @@
> >
> >  #define DMA_RING_SIZE 4096
> >
> > +#define ASYNC_ENQUEUE_VHOST 1
> > +#define ASYNC_DEQUEUE_VHOST 2
> > +
> >  /* number of mbufs in all pools - if specified on command-line. */
> >  static int total_num_mbufs = NUM_MBUFS_DEFAULT;
> >
> > @@ -116,6 +119,8 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES;
> >  static char *socket_files;
> >  static int nb_sockets;
> >
> > +static struct vhost_queue_ops vdev_queue_ops[RTE_MAX_VHOST_DEVICE];
> > +
> >  /* empty VMDq configuration structure. Filled in programmatically */
> >  static struct rte_eth_conf vmdq_conf_default = {
> >  	.rxmode = {
> > @@ -205,6 +210,18 @@ struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * RTE_MAX_VHOST_DEVICE];
> >  #define MBUF_TABLE_DRAIN_TSC	((rte_get_tsc_hz() + US_PER_S - 1) \
> >  				 / US_PER_S * BURST_TX_DRAIN_US)
> >
> > +static int vid2socketid[RTE_MAX_VHOST_DEVICE];
> > +
> > +static uint32_t get_async_flag_by_socketid(int socketid)
> > +{
> > +	return dma_bind[socketid].async_flag;
> > +}
> > +
> > +static void init_vid2socketid_array(int vid, int socketid)
> > +{
> > +	vid2socketid[vid] = socketid;
> > +}
>
> Return value and func name should be on separate lines, as per the coding
> style. And the above func can be inline; the same suggestion applies to the
> short funcs below, especially the ones in the data path.

Thanks Chenbo, will fix it in the next version.
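Just to confirm my understanding, the two helpers above would then look like
this (a minimal sketch only, assuming the fix is exactly the two changes you
mention: return type on its own line, and "static inline" for the short
helpers):

    static inline uint32_t
    get_async_flag_by_socketid(int socketid)
    {
    	/* body unchanged from v6; only the declaration style differs */
    	return dma_bind[socketid].async_flag;
    }

    static inline void
    init_vid2socketid_array(int vid, int socketid)
    {
    	vid2socketid[vid] = socketid;
    }

The same pattern would apply to the other short data-path helpers below.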
Regards,
Xuan

>
> Thanks,
> Chenbo
>
> > +
> >  static inline bool
> >  is_dma_configured(int16_t dev_id)
> >  {
> > @@ -224,7 +241,7 @@ open_dma(const char *value)
> >  	char *addrs = input;
> >  	char *ptrs[2];
> >  	char *start, *end, *substr;
> > -	int64_t vid;
> > +	int64_t socketid, vring_id;
> >
> >  	struct rte_dma_info info;
> >  	struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> > @@ -262,7 +279,9 @@ open_dma(const char *value)
> >
> >  	while (i < args_nr) {
> >  		char *arg_temp = dma_arg[i];
> > +		char *txd, *rxd;
> >  		uint8_t sub_nr;
> > +		int async_flag;
> >
> >  		sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> >  		if (sub_nr != 2) {
> > @@ -270,14 +289,23 @@ open_dma(const char *value)
> >  			goto out;
> >  		}
> >
> > -		start = strstr(ptrs[0], "txd");
> > -		if (start == NULL) {
> > +		txd = strstr(ptrs[0], "txd");
> > +		rxd = strstr(ptrs[0], "rxd");
> > +		if (txd) {
> > +			start = txd;
> > +			vring_id = VIRTIO_RXQ;
> > +			async_flag = ASYNC_ENQUEUE_VHOST;
> > +		} else if (rxd) {
> > +			start = rxd;
> > +			vring_id = VIRTIO_TXQ;
> > +			async_flag = ASYNC_DEQUEUE_VHOST;
> > +		} else {
> >  			ret = -1;
> >  			goto out;
> >  		}
> >
> >  		start += 3;
> > -		vid = strtol(start, &end, 0);
> > +		socketid = strtol(start, &end, 0);
> >  		if (end == start) {
> >  			ret = -1;
> >  			goto out;
> >  		}
> > @@ -338,7 +366,8 @@ open_dma(const char *value)
> >  		dmas_id[dma_count++] = dev_id;
> >
> >  done:
> > -		(dma_info + vid)->dmas[VIRTIO_RXQ].dev_id = dev_id;
> > +		(dma_info + socketid)->dmas[vring_id].dev_id = dev_id;
> > +		(dma_info + socketid)->async_flag |= async_flag;
> >  		i++;
> >  	}
> > out:
> > @@ -990,7 +1019,7 @@ complete_async_pkts(struct vhost_dev *vdev)
> >  {
> >  	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> >  	uint16_t complete_count;
> > -	int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> > +	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].dev_id;
> >
> >  	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> >  					VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
> > @@ -1029,22 +1058,7 @@ drain_vhost(struct vhost_dev *vdev)
> >  	uint16_t nr_xmit = vhost_txbuff[buff_idx]->len;
> >  	struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table;
> >
> > -	if (builtin_net_driver) {
> > -		ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> > -	} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > -		uint16_t enqueue_fail = 0;
> > -		int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> > -
> > -		complete_async_pkts(vdev);
> > -		ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit, dma_id, 0);
> > -
> > -		enqueue_fail = nr_xmit - ret;
> > -		if (enqueue_fail)
> > -			free_pkts(&m[ret], nr_xmit - ret);
> > -	} else {
> > -		ret = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
> > -						m, nr_xmit);
> > -	}
> > +	ret = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev, VIRTIO_RXQ, m, nr_xmit);
> >
> >  	if (enable_stats) {
> >  		__atomic_add_fetch(&vdev->stats.rx_total_atomic, nr_xmit,
> > @@ -1053,7 +1067,7 @@ drain_vhost(struct vhost_dev *vdev)
> >  				__ATOMIC_SEQ_CST);
> >  	}
> >
> > -	if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > +	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
> >  		free_pkts(m, nr_xmit);
> >  }
> >
> > @@ -1325,6 +1339,32 @@ drain_mbuf_table(struct mbuf_table *tx_q)
> >  	}
> >  }
> >
> > +uint16_t
> > +async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +		struct rte_mbuf **pkts, uint32_t rx_count)
> > +{
> > +	uint16_t enqueue_count;
> > +	uint16_t enqueue_fail = 0;
> > +	uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_RXQ].dev_id;
> > +
> > +	complete_async_pkts(dev);
> > +	enqueue_count = rte_vhost_submit_enqueue_burst(dev->vid, queue_id,
> > +				pkts, rx_count, dma_id, 0);
> > +
> > +	enqueue_fail = rx_count - enqueue_count;
> > +	if (enqueue_fail)
> > +		free_pkts(&pkts[enqueue_count], enqueue_fail);
> > +
> > +	return enqueue_count;
> > +}
> > +
> > +uint16_t
> > +sync_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +		struct rte_mbuf **pkts, uint32_t rx_count)
> > +{
> > +	return rte_vhost_enqueue_burst(dev->vid, queue_id, pkts, rx_count);
> > +}
> > +
> >  static __rte_always_inline void
> >  drain_eth_rx(struct vhost_dev *vdev)
> >  {
> > @@ -1355,25 +1395,8 @@ drain_eth_rx(struct vhost_dev *vdev)
> >  		}
> >  	}
> >
> > -	if (builtin_net_driver) {
> > -		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> > -						pkts, rx_count);
> > -	} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > -		uint16_t enqueue_fail = 0;
> > -		int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> > -
> > -		complete_async_pkts(vdev);
> > -		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> > -					VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
> > -
> > -		enqueue_fail = rx_count - enqueue_count;
> > -		if (enqueue_fail)
> > -			free_pkts(&pkts[enqueue_count], enqueue_fail);
> > -
> > -	} else {
> > -		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
> > -						pkts, rx_count);
> > -	}
> > +	enqueue_count = vdev_queue_ops[vdev->vid].enqueue_pkt_burst(vdev,
> > +					VIRTIO_RXQ, pkts, rx_count);
> >
> >  	if (enable_stats) {
> >  		__atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count,
> > @@ -1382,10 +1405,31 @@ drain_eth_rx(struct vhost_dev *vdev)
> >  				__ATOMIC_SEQ_CST);
> >  	}
> >
> > -	if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > +	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
> >  		free_pkts(pkts, rx_count);
> >  }
> >
> > +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count)
> > +{
> > +	int nr_inflight;
> > +	uint16_t dequeue_count;
> > +	uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_TXQ].dev_id;
> > +
> > +	dequeue_count = rte_vhost_async_try_dequeue_burst(dev->vid, queue_id,
> > +			mbuf_pool, pkts, count, &nr_inflight, dma_id, 0);
> > +
> > +	return dequeue_count;
> > +}
> > +
> > +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count)
> > +{
> > +	return rte_vhost_dequeue_burst(dev->vid, queue_id, mbuf_pool, pkts, count);
> > +}
> > +
> >  static __rte_always_inline void
> >  drain_virtio_tx(struct vhost_dev *vdev)
> >  {
> > @@ -1393,13 +1437,8 @@ drain_virtio_tx(struct vhost_dev *vdev)
> >  	uint16_t count;
> >  	uint16_t i;
> >
> > -	if (builtin_net_driver) {
> > -		count = vs_dequeue_pkts(vdev, VIRTIO_TXQ, mbuf_pool,
> > -					pkts, MAX_PKT_BURST);
> > -	} else {
> > -		count = rte_vhost_dequeue_burst(vdev->vid, VIRTIO_TXQ,
> > -					mbuf_pool, pkts, MAX_PKT_BURST);
> > -	}
> > +	count = vdev_queue_ops[vdev->vid].dequeue_pkt_burst(vdev,
> > +				VIRTIO_TXQ, mbuf_pool, pkts, MAX_PKT_BURST);
> >
> >  	/* setup VMDq for the first packet */
> >  	if (unlikely(vdev->ready == DEVICE_MAC_LEARNING) && count) {
> > @@ -1478,6 +1517,26 @@ switch_worker(void *arg __rte_unused)
> >  	return 0;
> >  }
> >
> > +static void
> > +vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id)
> > +{
> > +	uint16_t n_pkt = 0;
> > +	int pkts_inflight;
> > +
> > +	int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
> > +	pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid, queue_id);
> > +
> > +	struct rte_mbuf *m_cpl[pkts_inflight];
> > +
> > +	while (pkts_inflight) {
> > +		n_pkt = rte_vhost_clear_queue_thread_unsafe(vdev->vid, queue_id, m_cpl,
> > +							pkts_inflight, dma_id, 0);
> > +		free_pkts(m_cpl, n_pkt);
> > +		pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vdev->vid,
> > +									queue_id);
> > +	}
> > +}
> > +
> >  /*
> >   * Remove a device from the specific data core linked list and from the
> >   * main linked list. Synchronization occurs through the use of the
> > @@ -1535,27 +1594,79 @@ destroy_device(int vid)
> >  		vdev->vid);
> >
> >  	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > -		uint16_t n_pkt = 0;
> > -		int pkts_inflight;
> > -		int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > -		pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ);
> > -		struct rte_mbuf *m_cpl[pkts_inflight];
> > -
> > -		while (pkts_inflight) {
> > -			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> > -						m_cpl, pkts_inflight, dma_id, 0);
> > -			free_pkts(m_cpl, n_pkt);
> > -			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
> > -									VIRTIO_RXQ);
> > -		}
> > -
> > +		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ);
> >  		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> >  		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> >  	}
> >
> > +	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
> > +		vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ);
> > +		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
> > +		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
> > +	}
> > +
> >  	rte_free(vdev);
> >  }
> >
> > +static int
> > +get_socketid_by_vid(int vid)
> > +{
> > +	int i;
> > +	char ifname[PATH_MAX];
> > +	rte_vhost_get_ifname(vid, ifname, sizeof(ifname));
> > +
> > +	for (i = 0; i < nb_sockets; i++) {
> > +		char *file = socket_files + i * PATH_MAX;
> > +		if (strcmp(file, ifname) == 0)
> > +			return i;
> > +	}
> > +
> > +	return -1;
> > +}
> > +
> > +static int
> > +init_vhost_queue_ops(int vid)
> > +{
> > +	if (builtin_net_driver) {
> > +		vdev_queue_ops[vid].enqueue_pkt_burst = builtin_enqueue_pkts;
> > +		vdev_queue_ops[vid].dequeue_pkt_burst = builtin_dequeue_pkts;
> > +	} else {
> > +		if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled)
> > +			vdev_queue_ops[vid].enqueue_pkt_burst = async_enqueue_pkts;
> > +		else
> > +			vdev_queue_ops[vid].enqueue_pkt_burst = sync_enqueue_pkts;
> > +
> > +		if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled)
> > +			vdev_queue_ops[vid].dequeue_pkt_burst = async_dequeue_pkts;
> > +		else
> > +			vdev_queue_ops[vid].dequeue_pkt_burst = sync_dequeue_pkts;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +vhost_async_channel_register(int vid)
> > +{
> > +	int rx_ret = 0, tx_ret = 0;
> > +
> > +	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > +		rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > +		if (rx_ret == 0)
> > +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
> > +	}
> > +
> > +	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id != INVALID_DMA_ID) {
> > +		tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ);
> > +		if (tx_ret == 0)
> > +			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
> > +	}
> > +
> > +	return rx_ret | tx_ret;
> > +}
> > +
> > +
> > +
> >  /*
> >   * A new device is added to a data core. First the device is added to the main linked list
> >   * and then allocated to a specific data core.
> > @@ -1567,6 +1678,8 @@ new_device(int vid)
> >  	uint16_t i;
> >  	uint32_t device_num_min = num_devices;
> >  	struct vhost_dev *vdev;
> > +	int ret;
> > +
> >  	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
> >  	if (vdev == NULL) {
> >  		RTE_LOG(INFO, VHOST_DATA,
> > @@ -1589,6 +1702,17 @@ new_device(int vid)
> >  		}
> >  	}
> >
> > +	int socketid = get_socketid_by_vid(vid);
> > +	if (socketid == -1)
> > +		return -1;
> > +
> > +	init_vid2socketid_array(vid, socketid);
> > +
> > +	ret = vhost_async_channel_register(vid);
> > +
> > +	if (init_vhost_queue_ops(vid) != 0)
> > +		return -1;
> > +
> >  	if (builtin_net_driver)
> >  		vs_vhost_net_setup(vdev);
> >
> > @@ -1620,16 +1744,7 @@ new_device(int vid)
> >  		"(%d) device has been added to data core %d\n",
> >  		vid, vdev->coreid);
> >
> > -	if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > -		int ret;
> > -
> > -		ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > -		if (ret == 0)
> > -			dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true;
> > -		return ret;
> > -	}
> > -
> > -	return 0;
> > +	return ret;
> >  }
> >
> >  static int
> > @@ -1647,22 +1762,9 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> >  	if (queue_id != VIRTIO_RXQ)
> >  		return 0;
> >
> > -	if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > -		if (!enable) {
> > -			uint16_t n_pkt = 0;
> > -			int pkts_inflight;
> > -			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id);
> > -			int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > -			struct rte_mbuf *m_cpl[pkts_inflight];
> > -
> > -			while (pkts_inflight) {
> > -				n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
> > -							m_cpl, pkts_inflight, dma_id, 0);
> > -				free_pkts(m_cpl, n_pkt);
> > -				pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
> > -									queue_id);
> > -			}
> > -		}
> > +	if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) {
> > +		if (!enable)
> > +			vhost_clear_queue_thread_unsafe(vdev, queue_id);
> >  	}
> >
> >  	return 0;
> > @@ -1887,7 +1989,7 @@ main(int argc, char *argv[])
> >  	for (i = 0; i < nb_sockets; i++) {
> >  		char *file = socket_files + i * PATH_MAX;
> >
> > -		if (dma_count)
> > +		if (dma_count && get_async_flag_by_socketid(i) != 0)
> >  			flags = flags | RTE_VHOST_USER_ASYNC_COPY;
> >
> >  		ret = rte_vhost_driver_register(file, flags);
> > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > index e7f395c3c9..2fcb8376c5 100644
> > --- a/examples/vhost/main.h
> > +++ b/examples/vhost/main.h
> > @@ -61,6 +61,19 @@ struct vhost_dev {
> >  	struct vhost_queue queues[MAX_QUEUE_PAIRS * 2];
> >  } __rte_cache_aligned;
> >
> > +typedef uint16_t (*vhost_enqueue_burst_t)(struct vhost_dev *dev,
> > +			uint16_t queue_id, struct rte_mbuf **pkts,
> > +			uint32_t count);
> > +
> > +typedef uint16_t (*vhost_dequeue_burst_t)(struct vhost_dev *dev,
> > +			uint16_t queue_id, struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count);
> > +
> > +struct vhost_queue_ops {
> > +	vhost_enqueue_burst_t enqueue_pkt_burst;
> > +	vhost_dequeue_burst_t dequeue_pkt_burst;
> > +};
> > +
> >  TAILQ_HEAD(vhost_dev_tailq_list, vhost_dev);
> >
> >
> > @@ -87,6 +100,7 @@ struct dma_info {
> >
> >  struct dma_for_vhost {
> >  	struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> > +	uint32_t async_flag;
> >  };
> >
> >  /* we implement non-extra virtio net features */
> > @@ -97,7 +111,19 @@ void vs_vhost_net_remove(struct vhost_dev *dev);
> >  uint16_t vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> >  			 struct rte_mbuf **pkts, uint32_t count);
> >
> > -uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > -			 struct rte_mempool *mbuf_pool,
> > -			 struct rte_mbuf **pkts, uint16_t count);
> > +uint16_t builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mbuf **pkts, uint32_t count);
> > +uint16_t builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count);
> > +uint16_t sync_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mbuf **pkts, uint32_t count);
> > +uint16_t sync_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count);
> > +uint16_t async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mbuf **pkts, uint32_t count);
> > +uint16_t async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +			struct rte_mempool *mbuf_pool,
> > +			struct rte_mbuf **pkts, uint16_t count);
> >
> >  #endif /* _MAIN_H_ */
> > diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
> > index 9064fc3a82..2432a96566 100644
> > --- a/examples/vhost/virtio_net.c
> > +++ b/examples/vhost/virtio_net.c
> > @@ -238,6 +238,13 @@ vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> >  	return count;
> >  }
> >
> > +uint16_t
> > +builtin_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +		struct rte_mbuf **pkts, uint32_t count)
> > +{
> > +	return vs_enqueue_pkts(dev, queue_id, pkts, count);
> > +}
> > +
> >  static __rte_always_inline int
> >  dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
> >  	    struct rte_mbuf *m, uint16_t desc_idx,
> > @@ -363,7 +370,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
> >  	return 0;
> >  }
> >
> > -uint16_t
> > +static uint16_t
> >  vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> >  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> >  {
> > @@ -440,3 +447,10 @@ vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> >
> >  	return i;
> >  }
> > +
> > +uint16_t
> > +builtin_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
> > +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> > +{
> > +	return vs_dequeue_pkts(dev, queue_id, mbuf_pool, pkts, count);
> > +}
> > --
> > 2.17.1