From: "Jiang, Cheng1"
To: "Hu, Jiayu", "maxime.coquelin@redhat.com", "Xia, Chenbo"
Cc: "dev@dpdk.org", "Yang, YvonneX", "Wang, Yinan"
Date: Thu, 8 Apr 2021 12:01:18 +0000
Subject: Re: [dpdk-dev] [PATCH v3] vhost: add support for packed ring in async vhost
References: <20210317085426.10119-1-Cheng1.jiang@intel.com>
 <20210331140629.45066-1-Cheng1.jiang@intel.com>
List-Id: DPDK patches and discussions

Hi Jiayu,

> -----Original Message-----
> From: Hu, Jiayu
> Sent: Wednesday, April 7, 2021 2:27 PM
> To: Jiang, Cheng1; maxime.coquelin@redhat.com; Xia, Chenbo
> Cc: dev@dpdk.org; Yang, YvonneX; Wang, Yinan
> Subject: RE: [PATCH v3] vhost: add support for packed ring in async vhost
>
> Hi Cheng,
>
> Some comments are inline.
>
> > -----Original Message-----
> > From: Jiang, Cheng1
> > Sent: Wednesday, March 31, 2021 10:06 PM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo
> > Cc: dev@dpdk.org; Hu, Jiayu; Yang, YvonneX; Wang, Yinan; Jiang, Cheng1
> > Subject: [PATCH v3] vhost: add support for packed ring in async vhost
> >
> > For now, the async vhost data path only supports the split ring
> > layout. To make async vhost compatible with the virtio 1.1 spec, this
> > patch enables packed ring support in the async vhost data path.
> >
> > Signed-off-by: Cheng Jiang
> > ---
> > v3:
> >   * fix error handler for DMA-copy packet
> >   * remove variables that are no longer needed
> > v2:
> >   * fix wrong buffer index in rte_vhost_poll_enqueue_completed()
> >   * add async_buffers_packed memory free in vhost_free_async_mem()
> >
> >  lib/librte_vhost/rte_vhost_async.h |   1 +
> >  lib/librte_vhost/vhost.c           |  24 +-
> >  lib/librte_vhost/vhost.h           |   7 +-
> >  lib/librte_vhost/virtio_net.c      | 463 +++++++++++++++++++++++++++--
> >  4 files changed, 457 insertions(+), 38 deletions(-)
> >
> > diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
> > index c855ff875..6faa31f5a 100644
> > --- a/lib/librte_vhost/rte_vhost_async.h
> > +++ b/lib/librte_vhost/rte_vhost_async.h
> > @@ -89,6 +89,7 @@ struct rte_vhost_async_channel_ops {
> >  struct async_inflight_info {
> >      struct rte_mbuf *mbuf;
> >      uint16_t descs; /* num of descs inflight */
> > +    uint16_t nr_buffers; /* num of buffers inflight for packed ring */
> >  };
> >
> >  /**
> > diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> > index 52ab93d1e..51b44d6f2 100644
> > --- a/lib/librte_vhost/vhost.c
> > +++ b/lib/librte_vhost/vhost.c
> > @@ -330,15 +330,20 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> >  {
> >      if (vq->async_pkts_info)
> >          rte_free(vq->async_pkts_info);
> > -    if (vq->async_descs_split)
> > +    if (vq->async_buffers_packed) {
> > +        rte_free(vq->async_buffers_packed);
> > +        vq->async_buffers_packed = NULL;
> > +    } else {
> >          rte_free(vq->async_descs_split);
> > +        vq->async_descs_split = NULL;
> > +    }
> > +
> >      if (vq->it_pool)
> >          rte_free(vq->it_pool);
> >      if (vq->vec_pool)
> >          rte_free(vq->vec_pool);
> >
> >      vq->async_pkts_info = NULL;
> > -    vq->async_descs_split = NULL;
> >      vq->it_pool = NULL;
> >      vq->vec_pool = NULL;
> >  }
> > @@ -1603,9 +1608,9 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> >          return -1;
> >
> >      /* packed queue is not supported */
> > -    if (unlikely(vq_is_packed(dev) || !f.async_inorder)) {
> > +    if (unlikely(!f.async_inorder)) {
> >          VHOST_LOG_CONFIG(ERR,
> > -            "async copy is not supported on packed queue or non-inorder mode "
> > +            "async copy is not supported on non-inorder mode "
> >              "(vid %d, qid: %d)\n", vid, queue_id);
> >          return -1;
> >      }
> > @@ -1643,10 +1648,17 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> >      vq->vec_pool = rte_malloc_socket(NULL,
> >              VHOST_MAX_ASYNC_VEC * sizeof(struct iovec),
> >              RTE_CACHE_LINE_SIZE, node);
> > -    vq->async_descs_split = rte_malloc_socket(NULL,
> > +    if (vq_is_packed(dev)) {
> > +        vq->async_buffers_packed = rte_malloc_socket(NULL,
> > +            vq->size * sizeof(struct vring_used_elem_packed),
> > +            RTE_CACHE_LINE_SIZE, node);
> > +    } else {
> > +        vq->async_descs_split = rte_malloc_socket(NULL,
> >              vq->size * sizeof(struct vring_used_elem),
> >              RTE_CACHE_LINE_SIZE, node);
> > -    if (!vq->async_descs_split || !vq->async_pkts_info ||
> > +    }
> > +
> > +    if (!vq->async_pkts_info ||
>
> Need to check if malloc fails for async_buffers_packed.

Sure, it will be fixed in the next version.
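Something along these lines is what I have in mind -- just a sketch of the
idea, assuming the union of the two ring pointers stays as is, not the
final v4 code:

    /* check the allocation that matches the ring layout in use */
    if (!vq->async_pkts_info || !vq->it_pool || !vq->vec_pool ||
            (vq_is_packed(dev) ? !vq->async_buffers_packed :
                    !vq->async_descs_split)) {
        vhost_free_async_mem(vq);
        /* existing error logging and error return path */
    }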
> >          !vq->it_pool || !vq->vec_pool) {
> >          vhost_free_async_mem(vq);
> >          VHOST_LOG_CONFIG(ERR,
> > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > index 658f6fc28..d6324fbf8 100644
> > --- a/lib/librte_vhost/vhost.h
> > +++ b/lib/librte_vhost/vhost.h
> > @@ -206,9 +206,14 @@ struct vhost_virtqueue {
> >      uint16_t async_pkts_idx;
> >      uint16_t async_pkts_inflight_n;
> >      uint16_t async_last_pkts_n;
> > -    struct vring_used_elem *async_descs_split;
> > +    union {
> > +        struct vring_used_elem *async_descs_split;
> > +        struct vring_used_elem_packed *async_buffers_packed;
> > +    };
> >      uint16_t async_desc_idx;
> > +    uint16_t async_packed_buffer_idx;
> >      uint16_t last_async_desc_idx;
> > +    uint16_t last_async_buffer_idx;
> >
> >      /* vq async features */
> >      bool async_inorder;
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 583bf379c..fa2dfde02 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -363,8 +363,7 @@ vhost_shadow_dequeue_single_packed_inorder(struct vhost_virtqueue *vq,
> >  }
> >
> >  static __rte_always_inline void
> > -vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> > -            struct vhost_virtqueue *vq,
> > +vhost_shadow_enqueue_packed(struct vhost_virtqueue *vq,
> >              uint32_t len[],
> >              uint16_t id[],
> >              uint16_t count[],
> > @@ -382,6 +381,17 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> >          vq->shadow_aligned_idx += count[i];
> >          vq->shadow_used_idx++;
> >      }
> > +}
> > +
> > +static __rte_always_inline void
> > +vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> > +            struct vhost_virtqueue *vq,
> > +            uint32_t len[],
> > +            uint16_t id[],
> > +            uint16_t count[],
> > +            uint16_t num_buffers)
> > +{
> > +    vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers);
> >
> >      if (vq->shadow_aligned_idx >= PACKED_BATCH_SIZE) {
> >          do_data_copy_enqueue(dev, vq);
> > @@ -1452,6 +1462,73 @@ virtio_dev_rx_async_get_info_idx(uint16_t pkts_idx,
> >          (vq_size - n_inflight + pkts_idx) & (vq_size - 1);
> >  }
> >
> > +static __rte_always_inline void
> > +vhost_update_used_packed(struct virtio_net *dev,
> > +            struct vhost_virtqueue *vq,
> > +            struct vring_used_elem_packed *shadow_ring,
> > +            uint16_t count)
> > +{
> > +    if (count == 0)
> > +        return;
> > +    int i;
> > +    uint16_t used_idx = vq->last_used_idx;
> > +    uint16_t head_idx = vq->last_used_idx;
> > +    uint16_t head_flags = 0;
> > +
> > +    /* Split loop in two to save memory barriers */
> > +    for (i = 0; i < count; i++) {
> > +        vq->desc_packed[used_idx].id = shadow_ring[i].id;
> > +        vq->desc_packed[used_idx].len = shadow_ring[i].len;
> > +
> > +        used_idx += shadow_ring[i].count;
> > +        if (used_idx >= vq->size)
> > +            used_idx -= vq->size;
> > +    }
> > +
> > +    /* The ordering for storing desc flags needs to be enforced. */
> > +    rte_atomic_thread_fence(__ATOMIC_RELEASE);
> > +
> > +    for (i = 0; i < count; i++) {
> > +        uint16_t flags;
> > +
> > +        if (vq->shadow_used_packed[i].len)
> > +            flags = VRING_DESC_F_WRITE;
> > +        else
> > +            flags = 0;
> > +
> > +        if (vq->used_wrap_counter) {
> > +            flags |= VRING_DESC_F_USED;
> > +            flags |= VRING_DESC_F_AVAIL;
> > +        } else {
> > +            flags &= ~VRING_DESC_F_USED;
> > +            flags &= ~VRING_DESC_F_AVAIL;
> > +        }
> > +
> > +        if (i > 0) {
> > +            vq->desc_packed[vq->last_used_idx].flags = flags;
> > +
> > +            vhost_log_cache_used_vring(dev, vq,
> > +                vq->last_used_idx *
> > +                sizeof(struct vring_packed_desc),
> > +                sizeof(struct vring_packed_desc));
> > +        } else {
> > +            head_idx = vq->last_used_idx;
> > +            head_flags = flags;
> > +        }
> > +
> > +        vq_inc_last_used_packed(vq, shadow_ring[i].count);
> > +    }
> > +
> > +    vq->desc_packed[head_idx].flags = head_flags;
> > +
> > +    vhost_log_cache_used_vring(dev, vq,
> > +        head_idx *
> > +        sizeof(struct vring_packed_desc),
> > +        sizeof(struct vring_packed_desc));
> > +
> > +    vhost_log_cache_sync(dev, vq);
>
> Async enqueue of packed ring has no support for live migration, so the
> above logging code is not needed.

It will be removed.
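With the logging gone, the function should reduce to roughly the following
(a sketch only, untested; the dev argument can be dropped along with the
log calls, and note it reads shadow_ring[i].len rather than
vq->shadow_used_packed[i].len, since the error-recovery path passes
async_buffers_packed as shadow_ring):

    static __rte_always_inline void
    vhost_update_used_packed(struct vhost_virtqueue *vq,
            struct vring_used_elem_packed *shadow_ring,
            uint16_t count)
    {
        int i;
        uint16_t used_idx = vq->last_used_idx;
        uint16_t head_idx = vq->last_used_idx;
        uint16_t head_flags = 0;

        if (count == 0)
            return;

        /* first pass: store id/len of every used descriptor */
        for (i = 0; i < count; i++) {
            vq->desc_packed[used_idx].id = shadow_ring[i].id;
            vq->desc_packed[used_idx].len = shadow_ring[i].len;

            used_idx += shadow_ring[i].count;
            if (used_idx >= vq->size)
                used_idx -= vq->size;
        }

        /* id/len must be visible before any flags store */
        rte_atomic_thread_fence(__ATOMIC_RELEASE);

        /* second pass: store flags; the head descriptor goes last */
        for (i = 0; i < count; i++) {
            uint16_t flags = shadow_ring[i].len ?
                VRING_DESC_F_WRITE : 0;

            if (vq->used_wrap_counter)
                flags |= VRING_DESC_F_AVAIL | VRING_DESC_F_USED;

            if (i > 0)
                vq->desc_packed[vq->last_used_idx].flags = flags;
            else
                head_flags = flags;

            vq_inc_last_used_packed(vq, shadow_ring[i].count);
        }

        /* expose the whole batch to the driver in one store */
        vq->desc_packed[head_idx].flags = head_flags;
    }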
> > +}
> > +
> >  static __rte_noinline uint32_t
> >  virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >      struct vhost_virtqueue *vq, uint16_t queue_id,
> > @@ -1633,12 +1710,292 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >      return pkt_idx;
> >  }
> >
> > +static __rte_always_inline int
> > +vhost_enqueue_async_single_packed(struct virtio_net *dev,
> > +            struct vhost_virtqueue *vq,
> > +            struct rte_mbuf *pkt,
> > +            struct buf_vector *buf_vec,
> > +            uint16_t *nr_descs,
> > +            uint16_t *nr_buffers,
> > +            struct iovec *src_iovec, struct iovec *dst_iovec,
> > +            struct rte_vhost_iov_iter *src_it,
> > +            struct rte_vhost_iov_iter *dst_it)
> > +{
> > +    uint16_t nr_vec = 0;
> > +    uint16_t avail_idx = vq->last_avail_idx;
> > +    uint16_t max_tries, tries = 0;
> > +    uint16_t buf_id = 0;
> > +    uint32_t len = 0;
> > +    uint16_t desc_count;
> > +    uint32_t size = pkt->pkt_len + sizeof(struct virtio_net_hdr_mrg_rxbuf);
> > +    uint32_t buffer_len[vq->size];
> > +    uint16_t buffer_buf_id[vq->size];
> > +    uint16_t buffer_desc_count[vq->size];
> > +
> > +    *nr_buffers = 0;
> > +
> > +    if (rxvq_is_mergeable(dev))
> > +        max_tries = vq->size - 1;
> > +    else
> > +        max_tries = 1;
> > +
> > +    while (size > 0) {
> > +        /*
> > +         * if we tried all available ring items, and still
> > +         * can't get enough buf, it means something abnormal
> > +         * happened.
> > +         */
> > +        if (unlikely(++tries > max_tries))
> > +            return -1;
> > +
> > +        if (unlikely(fill_vec_buf_packed(dev, vq, avail_idx, &desc_count,
> > +                buf_vec, &nr_vec, &buf_id, &len,
> > +                VHOST_ACCESS_RW) < 0))
> > +            return -1;
> > +
> > +        len = RTE_MIN(len, size);
> > +        size -= len;
> > +
> > +        buffer_len[*nr_buffers] = len;
> > +        buffer_buf_id[*nr_buffers] = buf_id;
> > +        buffer_desc_count[*nr_buffers] = desc_count;
> > +        *nr_buffers += 1;
> > +
> > +        *nr_descs += desc_count;
> > +        avail_idx += desc_count;
> > +        if (avail_idx >= vq->size)
> > +            avail_idx -= vq->size;
> > +    }
> > +
> > +    if (async_mbuf_to_desc(dev, vq, pkt, buf_vec, nr_vec, *nr_buffers,
> > +            src_iovec, dst_iovec, src_it, dst_it) < 0)
> > +        return -1;
> > +
> > +    vhost_shadow_enqueue_packed(vq, buffer_len, buffer_buf_id,
> > +            buffer_desc_count, *nr_buffers);
> > +
> > +    return 0;
> > +}
> > +
> > +static __rte_always_inline int16_t
> > +virtio_dev_rx_async_single_packed(struct virtio_net *dev,
> > +            struct vhost_virtqueue *vq,
> > +            struct rte_mbuf *pkt,
> > +            uint16_t *nr_descs, uint16_t *nr_buffers,
> > +            struct iovec *src_iovec, struct iovec *dst_iovec,
> > +            struct rte_vhost_iov_iter *src_it,
> > +            struct rte_vhost_iov_iter *dst_it)
> > +{
> > +    struct buf_vector buf_vec[BUF_VECTOR_MAX];
> > +
> > +    *nr_descs = 0;
> > +    *nr_buffers = 0;
> > +
> > +    if (unlikely(vhost_enqueue_async_single_packed(dev, vq, pkt, buf_vec,
> > +            nr_descs, nr_buffers,
> > +            src_iovec, dst_iovec,
> > +            src_it, dst_it) < 0)) {
> > +        VHOST_LOG_DATA(DEBUG,
> > +            "(%d) failed to get enough desc from vring\n",
> > +            dev->vid);
> > +        return -1;
> > +    }
> > +
> > +    VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
> > +        dev->vid, vq->last_avail_idx,
> > +        vq->last_avail_idx + *nr_descs);
> > +
> > +    return 0;
> > +}
> > +
> > +static __rte_noinline uint32_t
> > +virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> > +    struct vhost_virtqueue *vq, uint16_t queue_id,
> > +    struct rte_mbuf **pkts, uint32_t count,
> > +    struct rte_mbuf **comp_pkts, uint32_t *comp_count)
> > +{
> > +    uint32_t pkt_idx = 0, pkt_burst_idx = 0;
> > +    uint16_t num_buffers;
> > +    uint16_t num_desc;
> > +
> > +    struct rte_vhost_iov_iter *it_pool = vq->it_pool;
> > +    struct iovec *vec_pool = vq->vec_pool;
> > +    struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
> > +    struct iovec *src_iovec = vec_pool;
> > +    struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> > +    struct rte_vhost_iov_iter *src_it = it_pool;
> > +    struct rte_vhost_iov_iter *dst_it = it_pool + 1;
> > +    uint16_t slot_idx = 0;
> > +    uint16_t segs_await = 0;
> > +    struct async_inflight_info *pkts_info = vq->async_pkts_info;
> > +    uint32_t n_pkts = 0, pkt_err = 0;
> > +    uint32_t num_async_pkts = 0, num_done_pkts = 0;
> > +
> > +    rte_prefetch0(&vq->desc[vq->last_avail_idx & (vq->size - 1)]);
> > +
> > +    for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
> > +        if (unlikely(virtio_dev_rx_async_single_packed(dev, vq,
> > +                pkts[pkt_idx], &num_desc, &num_buffers,
> > +                src_iovec, dst_iovec, src_it, dst_it) < 0)) {
> > +            break;
> > +        }
> > +
> > +        VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
> > +            dev->vid, vq->last_avail_idx,
> > +            vq->last_avail_idx + num_desc);
> > +
> > +        slot_idx = (vq->async_pkts_idx + num_async_pkts) &
> > +            (vq->size - 1);
> > +        if (src_it->count) {
> > +            uint16_t from, to;
> > +
> > +            async_fill_desc(&tdes[pkt_burst_idx++], src_it, dst_it);
> > +            pkts_info[slot_idx].descs = num_desc;
> > +            pkts_info[slot_idx].nr_buffers = num_buffers;
> > +            pkts_info[slot_idx].mbuf = pkts[pkt_idx];
> > +            num_async_pkts++;
> > +            src_iovec += src_it->nr_segs;
> > +            dst_iovec += dst_it->nr_segs;
> > +            src_it += 2;
> > +            dst_it += 2;
> > +            segs_await += src_it->nr_segs;
> > +
> > +            /**
> > +             * recover shadow used ring and keep DMA-occupied
> > +             * descriptors.
> > +             */
> > +            from = vq->shadow_used_idx - num_buffers;
> > +            to = vq->async_packed_buffer_idx & (vq->size - 1);
> > +            if (num_buffers + to <= vq->size) {
> > +                rte_memcpy(&vq->async_buffers_packed[to],
> > +                    &vq->shadow_used_packed[from],
> > +                    num_buffers *
> > +                    sizeof(struct vring_used_elem_packed));
> > +            } else {
> > +                int size = vq->size - to;
> > +
> > +                rte_memcpy(&vq->async_buffers_packed[to],
> > +                    &vq->shadow_used_packed[from],
> > +                    size *
> > +                    sizeof(struct vring_used_elem_packed));
> > +                rte_memcpy(vq->async_buffers_packed,
> > +                    &vq->shadow_used_packed[from + size],
> > +                    (num_buffers - size) *
> > +                    sizeof(struct vring_used_elem_packed));
> > +            }
> > +            vq->async_packed_buffer_idx += num_buffers;
> > +            vq->shadow_used_idx -= num_buffers;
> > +        } else
> > +            comp_pkts[num_done_pkts++] = pkts[pkt_idx];
> > +
> > +        vq_inc_last_avail_packed(vq, num_desc);
> > +
> > +        /*
> > +         * conditions to trigger async device transfer:
> > +         * - buffered packet number reaches transfer threshold
> > +         * - unused async iov number is less than max vhost vector
> > +         */
> > +        if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> > +                ((VHOST_MAX_ASYNC_VEC >> 1) - segs_await <
> > +                BUF_VECTOR_MAX))) {
> > +            n_pkts = vq->async_ops.transfer_data(dev->vid,
> > +                    queue_id, tdes, 0, pkt_burst_idx);
> > +            src_iovec = vec_pool;
> > +            dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> > +            src_it = it_pool;
> > +            dst_it = it_pool + 1;
> > +            segs_await = 0;
> > +            vq->async_pkts_inflight_n += n_pkts;
> > +
> > +            if (unlikely(n_pkts < pkt_burst_idx)) {
> > +                /*
> > +                 * log error packets number here and do
> > +                 * actual error processing when applications
> > +                 * poll completion
> > +                 */
> > +                pkt_err = pkt_burst_idx - n_pkts;
> > +                pkt_burst_idx = 0;
> > +                pkt_idx++;
> > +                break;
> > +            }
> > +
> > +            pkt_burst_idx = 0;
> > +        }
> > +    }
> > +
> > +    if (pkt_burst_idx) {
> > +        n_pkts = vq->async_ops.transfer_data(dev->vid,
> > +                queue_id, tdes, 0, pkt_burst_idx);
> > +        vq->async_pkts_inflight_n += n_pkts;
> > +
> > +        if (unlikely(n_pkts < pkt_burst_idx))
> > +            pkt_err = pkt_burst_idx - n_pkts;
> > +    }
> > +
> > +    do_data_copy_enqueue(dev, vq);
> > +
> > +    if (unlikely(pkt_err)) {
> > +        uint16_t buffers_err = 0;
> > +        uint16_t async_buffer_idx;
> > +        uint16_t i;
> > +
> > +        num_async_pkts -= pkt_err;
> > +        pkt_idx -= pkt_err;
> > +        /* calculate the sum of buffers of DMA-error packets. */
> > +        while (pkt_err-- > 0) {
> > +            buffers_err +=
> > +                pkts_info[slot_idx & (vq->size - 1)].nr_buffers;
> > +            slot_idx--;
> > +        }
> > +
> > +        vq->async_packed_buffer_idx -= buffers_err;
> > +        async_buffer_idx = vq->async_packed_buffer_idx;
> > +        /* set 0 to the length of descriptors of DMA-error packets */
> > +        for (i = 0; i < buffers_err; i++) {
> > +            vq->async_buffers_packed[(async_buffer_idx + i) &
> > +                (vq->size - 1)].len = 0;
> > +        }
> > +        /* write back DMA-error descriptors to used ring */
> > +        do {
> > +            uint16_t from = async_buffer_idx & (vq->size - 1);
> > +            uint16_t to = (from + buffers_err) & (vq->size - 1);
> > +
> > +            if (to > from) {
> > +                vhost_update_used_packed(dev, vq,
> > +                    vq->async_buffers_packed + from,
> > +                    to - from);
> > +                buffers_err = 0;
> > +            } else {
> > +                vhost_update_used_packed(dev, vq,
> > +                    vq->async_buffers_packed + from,
> > +                    vq->size - from);
> > +                buffers_err -= vq->size - from;
> > +            }
> > +        } while (buffers_err > 0);
> > +        vhost_vring_call_packed(dev, vq);
>
> Why notify the front-end here?

The error handling method will be changed in the next version, so this
notification will be removed.
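Roughly, the idea is to keep the descriptor write-back but leave the guest
notification to the completion-polling path. A sketch, using a hypothetical
helper name and the trimmed vhost_update_used_packed() signature from the
earlier sketch, not the final v4 code (note the base index also has to
advance when the write-back wraps around the ring):

    /* write back the len = 0 descriptors of failed copies; no kick here,
     * rte_vhost_poll_enqueue_completed() will notify the guest instead.
     */
    static __rte_always_inline void
    write_back_err_descs_packed(struct vhost_virtqueue *vq,
            uint16_t buffer_idx, uint16_t buffers_err)
    {
        do {
            uint16_t from = buffer_idx & (vq->size - 1);
            uint16_t to = (from + buffers_err) & (vq->size - 1);

            if (to > from) {
                vhost_update_used_packed(vq,
                    vq->async_buffers_packed + from, to - from);
                buffers_err = 0;
            } else {
                vhost_update_used_packed(vq,
                    vq->async_buffers_packed + from,
                    vq->size - from);
                buffer_idx += vq->size - from;
                buffers_err -= vq->size - from;
            }
        } while (buffers_err > 0);
    }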
> > +        num_done_pkts = pkt_idx - num_async_pkts;
> > +    }
> > +
> > +    vq->async_pkts_idx += num_async_pkts;
> > +    *comp_count = num_done_pkts;
> > +
> > +    if (likely(vq->shadow_used_idx)) {
> > +        vhost_flush_enqueue_shadow_packed(dev, vq);
> > +        vhost_vring_call_packed(dev, vq);
> > +    }
> > +
> > +    return pkt_idx;
> > +}
>
> virtio_dev_rx_async_submit_packed is too long, and several parts of it
> are similar to the split ring path. I think you need to abstract the
> common parts into inline functions to make the code easier to read.

I'm not sure which parts can easily be factored into separate functions.
Maybe we can have a discussion offline.
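One candidate might be the wraparound copy from the shadow ring into the
async descriptor ring. As a rough sketch (hypothetical name and shape, just
to illustrate the idea; a split variant could cover the same pattern with
struct vring_used_elem):

    static __rte_always_inline void
    store_dma_desc_info_packed(struct vring_used_elem_packed *s_ring,
            struct vring_used_elem_packed *d_ring,
            uint16_t ring_size, uint16_t s_idx, uint16_t d_idx,
            uint16_t count)
    {
        size_t elem_size = sizeof(struct vring_used_elem_packed);

        if (d_idx + count <= ring_size) {
            rte_memcpy(d_ring + d_idx, s_ring + s_idx,
                    count * elem_size);
        } else {
            uint16_t size = ring_size - d_idx;

            rte_memcpy(d_ring + d_idx, s_ring + s_idx,
                    size * elem_size);
            rte_memcpy(d_ring, s_ring + s_idx + size,
                    (count - size) * elem_size);
        }
    }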
> > +
> >  uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >          struct rte_mbuf **pkts, uint16_t count)
> >  {
> >      struct virtio_net *dev = get_device(vid);
> >      struct vhost_virtqueue *vq;
> > -    uint16_t n_pkts_cpl = 0, n_pkts_put = 0, n_descs = 0;
> > +    uint16_t n_pkts_cpl = 0, n_pkts_put = 0, n_descs = 0, n_buffers = 0;
> >      uint16_t start_idx, pkts_idx, vq_size;
> >      struct async_inflight_info *pkts_info;
> >      uint16_t from, i;
> > @@ -1680,53 +2037,96 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >          goto done;
> >      }
> >
> > -    for (i = 0; i < n_pkts_put; i++) {
> > -        from = (start_idx + i) & (vq_size - 1);
> > -        n_descs += pkts_info[from].descs;
> > -        pkts[i] = pkts_info[from].mbuf;
> > +    if (vq_is_packed(dev)) {
> > +        for (i = 0; i < n_pkts_put; i++) {
> > +            from = (start_idx + i) & (vq_size - 1);
> > +            n_buffers += pkts_info[from].nr_buffers;
> > +            pkts[i] = pkts_info[from].mbuf;
> > +        }
> > +    } else {
> > +        for (i = 0; i < n_pkts_put; i++) {
> > +            from = (start_idx + i) & (vq_size - 1);
> > +            n_descs += pkts_info[from].descs;
> > +            pkts[i] = pkts_info[from].mbuf;
> > +        }
> >      }
> > +
> >      vq->async_last_pkts_n = n_pkts_cpl - n_pkts_put;
> >      vq->async_pkts_inflight_n -= n_pkts_put;
> >
> >      if (likely(vq->enabled && vq->access_ok)) {
> > -        uint16_t nr_left = n_descs;
> >          uint16_t nr_copy;
> >          uint16_t to;
> >
> >          /* write back completed descriptors to used ring */
> > -        do {
> > -            from = vq->last_async_desc_idx & (vq->size - 1);
> > -            nr_copy = nr_left + from <= vq->size ? nr_left :
> > -                vq->size - from;
> > -            to = vq->last_used_idx & (vq->size - 1);
> > -
> > -            if (to + nr_copy <= vq->size) {
> > -                rte_memcpy(&vq->used->ring[to],
> > +        if (vq_is_packed(dev)) {
> > +            uint16_t nr_left = n_buffers;
> > +            uint16_t to;
> > +            do {
> > +                from = vq->last_async_buffer_idx &
> > +                    (vq->size - 1);
> > +                to = (from + nr_left) & (vq->size - 1);
> > +
> > +                if (to > from) {
> > +                    vhost_update_used_packed(dev, vq,
> > +                        vq->async_buffers_packed + from,
> > +                        to - from);
> > +                    vq->last_async_buffer_idx += nr_left;
> > +                    nr_left = 0;
> > +                } else {
> > +                    vhost_update_used_packed(dev, vq,
> > +                        vq->async_buffers_packed + from,
> > +                        vq->size - from);
> > +                    vq->last_async_buffer_idx +=
> > +                        vq->size - from;
> > +                    nr_left -= vq->size - from;
> > +                }
> > +            } while (nr_left > 0);
> > +            vhost_vring_call_packed(dev, vq);
> > +        } else {
> > +            uint16_t nr_left = n_descs;
> > +            do {
> > +                from = vq->last_async_desc_idx & (vq->size - 1);
> > +                nr_copy = nr_left + from <= vq->size ? nr_left :
> > +                    vq->size - from;
> > +                to = vq->last_used_idx & (vq->size - 1);
> > +
> > +                if (to + nr_copy <= vq->size) {
> > +                    rte_memcpy(&vq->used->ring[to],
> >                          &vq->async_descs_split[from],
> >                          nr_copy * sizeof(struct vring_used_elem));
> > -            } else {
> > -                uint16_t size = vq->size - to;
> > +                } else {
> > +                    uint16_t size = vq->size - to;
> >
> > -                rte_memcpy(&vq->used->ring[to],
> > +                    rte_memcpy(&vq->used->ring[to],
> >                          &vq->async_descs_split[from],
> >                          size * sizeof(struct vring_used_elem));
> > -                rte_memcpy(vq->used->ring,
> > +                    rte_memcpy(vq->used->ring,
> >                          &vq->async_descs_split[from + size],
> >                          (nr_copy - size) * sizeof(struct vring_used_elem));
> > -            }
> > +                }
> > +
> > +                vq->last_async_desc_idx += nr_copy;
> > +                vq->last_used_idx += nr_copy;
> > +                nr_left -= nr_copy;
> > +            } while (nr_left > 0);
> > +
> > +            __atomic_add_fetch(&vq->used->idx, n_descs,
> > +                __ATOMIC_RELEASE);
> > +            vhost_vring_call_split(dev, vq);
> > +        }
> >
> > -        vq->last_async_desc_idx += nr_copy;
> > -        vq->last_used_idx += nr_copy;
> > -        nr_left -= nr_copy;
> > -    } while (nr_left > 0);
> > -
> > -    __atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
> > -    vhost_vring_call_split(dev, vq);
> > -    } else
> > -        vq->last_async_desc_idx += n_descs;
> > +
> > +
> > +    } else {
> > +        if (vq_is_packed(dev))
> > +            vq->last_async_buffer_idx += n_buffers;
> > +        else
> > +            vq->last_async_desc_idx += n_descs;
> > +    }
>
> rte_vhost_poll_enqueue_completed is too long and not easy to read. Same
> suggestion as above.

I can try to factor some of the code into functions, but I'm not sure it
is necessary; I will discuss it with you later.

Thanks,
Cheng

> Thanks,
> Jiayu
>
> >
> >  done:
> >      rte_spinlock_unlock(&vq->access_lock);
> > @@ -1767,9 +2167,10 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
> >      if (count == 0)
> >          goto out;
> >
> > -    /* TODO: packed queue not implemented */
> >      if (vq_is_packed(dev))
> > -        nb_tx = 0;
> > +        nb_tx = virtio_dev_rx_async_submit_packed(dev,
> > +                vq, queue_id, pkts, count, comp_pkts,
> > +                comp_count);
> >      else
> >          nb_tx = virtio_dev_rx_async_submit_split(dev,
> >                  vq, queue_id, pkts, count, comp_pkts,
> >                  comp_count);
> > --
> > 2.29.2