From: "Jiang, Cheng1"
To: "Liu, Yong", "maxime.coquelin@redhat.com", "Xia, Chenbo"
CC: "dev@dpdk.org", "Hu, Jiayu", "Yang, YvonneX", "Wang, Yinan"
Subject: Re: [dpdk-dev] [PATCH v2] vhost: add support for packed ring in async vhost
Date: Mon, 29 Mar 2021 12:29:29 +0000
References: <20210317085426.10119-1-Cheng1.jiang@intel.com>
 <20210322061517.19013-1-Cheng1.jiang@intel.com>
 <2c90bd07cdbb49b39635a1301c48bd7a@intel.com>
In-Reply-To: <2c90bd07cdbb49b39635a1301c48bd7a@intel.com>
List-Id: DPDK patches and discussions

Hi,

> -----Original Message-----
> From: Liu, Yong
> Sent: Wednesday, March 24, 2021 5:19 PM
> To: Jiang, Cheng1; maxime.coquelin@redhat.com; Xia, Chenbo
> Cc: dev@dpdk.org; Hu, Jiayu; Yang, YvonneX; Wang, Yinan; Jiang, Cheng1
> Subject: RE: [dpdk-dev] [PATCH v2] vhost: add support for packed ring in
> async vhost
>
> > -----Original Message-----
> > From: dev On Behalf Of Cheng Jiang
> > Sent: Monday, March 22, 2021 2:15 PM
> > To: maxime.coquelin@redhat.com; Xia, Chenbo
> > Cc: dev@dpdk.org; Hu, Jiayu; Yang, YvonneX; Wang, Yinan; Jiang, Cheng1
> > Subject: [dpdk-dev] [PATCH v2] vhost: add support for packed ring in
> > async vhost
> >
> > Currently the async vhost data path only supports the split ring
> > layout. To make async vhost compatible with the virtio 1.1 spec,
> > this patch enables packed ring support in the async vhost data path.
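
(Background for readers less familiar with the vhost internals: the split
and packed layouts track completed buffers with different element types,
which is why the patch adds separate bookkeeping for the packed case
instead of reusing the split-ring arrays. A minimal sketch of the two
shapes -- the fields shown are the ones this diff actually touches (id,
len, count); the in-tree structures may carry additional fields, so treat
this as illustrative only:

	/* split ring: one used element per descriptor-chain head,
	 * copied back into the shared used ring on completion */
	struct vring_used_elem {
		uint32_t id;	/* head index of the used chain */
		uint32_t len;	/* total bytes written to the buffer */
	};

	/* packed ring: one element per buffer; 'count' records how many
	 * descriptors the buffer spans so last_used_idx can advance */
	struct vring_used_elem_packed {
		uint16_t id;
		uint32_t len;
		uint32_t count;
	};
)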
> >
> > Signed-off-by: Cheng Jiang
> > ---
> > v2:
> > * fix wrong buffer index in rte_vhost_poll_enqueue_completed()
> > * add async_buffers_packed memory free in vhost_free_async_mem()
> >
> >  lib/librte_vhost/rte_vhost_async.h |   1 +
> >  lib/librte_vhost/vhost.c           |  24 +-
> >  lib/librte_vhost/vhost.h           |   7 +-
> >  lib/librte_vhost/virtio_net.c      | 447 +++++++++++++++++++++++++++--
> >  4 files changed, 441 insertions(+), 38 deletions(-)
> >
> > diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
> > index c855ff875..6faa31f5a 100644
> > --- a/lib/librte_vhost/rte_vhost_async.h
> > +++ b/lib/librte_vhost/rte_vhost_async.h
> > @@ -89,6 +89,7 @@ struct rte_vhost_async_channel_ops {
> >  struct async_inflight_info {
> >  	struct rte_mbuf *mbuf;
> >  	uint16_t descs; /* num of descs inflight */
> > +	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
> >  };
> >
> >  /**
> > diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> > index 52ab93d1e..51b44d6f2 100644
> > --- a/lib/librte_vhost/vhost.c
> > +++ b/lib/librte_vhost/vhost.c
> > @@ -330,15 +330,20 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> >  {
> >  	if (vq->async_pkts_info)
> >  		rte_free(vq->async_pkts_info);
> > -	if (vq->async_descs_split)
> > +	if (vq->async_buffers_packed) {
> > +		rte_free(vq->async_buffers_packed);
> > +		vq->async_buffers_packed = NULL;
> > +	} else {
> >  		rte_free(vq->async_descs_split);
> > +		vq->async_descs_split = NULL;
> > +	}
> > +
> >  	if (vq->it_pool)
> >  		rte_free(vq->it_pool);
> >  	if (vq->vec_pool)
> >  		rte_free(vq->vec_pool);
> >
> >  	vq->async_pkts_info = NULL;
> > -	vq->async_descs_split = NULL;
> >  	vq->it_pool = NULL;
> >  	vq->vec_pool = NULL;
> >  }
> > @@ -1603,9 +1608,9 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> >  		return -1;
> >
> >  	/* packed queue is not supported */
> > -	if (unlikely(vq_is_packed(dev) || !f.async_inorder)) {
> > +	if (unlikely(!f.async_inorder)) {
> >  		VHOST_LOG_CONFIG(ERR,
> > -			"async copy is not supported on packed queue or non-inorder mode "
> > +			"async copy is not supported on non-inorder mode "
> >  			"(vid %d, qid: %d)\n", vid, queue_id);
> >  		return -1;
> >  	}
> > @@ -1643,10 +1648,17 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> >  	vq->vec_pool = rte_malloc_socket(NULL,
> >  			VHOST_MAX_ASYNC_VEC * sizeof(struct iovec),
> >  			RTE_CACHE_LINE_SIZE, node);
> > -	vq->async_descs_split = rte_malloc_socket(NULL,
> > +	if (vq_is_packed(dev)) {
> > +		vq->async_buffers_packed = rte_malloc_socket(NULL,
> > +			vq->size * sizeof(struct vring_used_elem_packed),
> > +			RTE_CACHE_LINE_SIZE, node);
> > +	} else {
> > +		vq->async_descs_split = rte_malloc_socket(NULL,
> >  			vq->size * sizeof(struct vring_used_elem),
> >  			RTE_CACHE_LINE_SIZE, node);
> > -	if (!vq->async_descs_split || !vq->async_pkts_info ||
> > +	}
> > +
> > +	if (!vq->async_pkts_info ||
> >  		!vq->it_pool || !vq->vec_pool) {
> >  		vhost_free_async_mem(vq);
> >  		VHOST_LOG_CONFIG(ERR,
> > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > index 658f6fc28..d6324fbf8 100644
> > --- a/lib/librte_vhost/vhost.h
> > +++ b/lib/librte_vhost/vhost.h
> > @@ -206,9 +206,14 @@ struct vhost_virtqueue {
> >  	uint16_t	async_pkts_idx;
> >  	uint16_t	async_pkts_inflight_n;
> >  	uint16_t	async_last_pkts_n;
> > -	struct vring_used_elem *async_descs_split;
> > +	union {
> > +		struct vring_used_elem *async_descs_split;
> > +		struct vring_used_elem_packed *async_buffers_packed;
> > +	};
> >  	uint16_t async_desc_idx;
> > +	uint16_t async_packed_buffer_idx;
> >  	uint16_t last_async_desc_idx;
> > +	uint16_t last_async_buffer_idx;
> >
> >  	/* vq async features */
> >  	bool		async_inorder;
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 583bf379c..fa8c4f4fe 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -363,8 +363,7 @@ vhost_shadow_dequeue_single_packed_inorder(struct vhost_virtqueue *vq,
> >  }
> >
> >  static __rte_always_inline void
> > -vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> > -				   struct vhost_virtqueue *vq,
> > +vhost_shadow_enqueue_packed(struct vhost_virtqueue *vq,
> >  				   uint32_t len[],
> >  				   uint16_t id[],
> >  				   uint16_t count[],
> > @@ -382,6 +381,17 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> >  		vq->shadow_aligned_idx += count[i];
> >  		vq->shadow_used_idx++;
> >  	}
> > +}
> > +
> > +static __rte_always_inline void
> > +vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> > +				   struct vhost_virtqueue *vq,
> > +				   uint32_t len[],
> > +				   uint16_t id[],
> > +				   uint16_t count[],
> > +				   uint16_t num_buffers)
> > +{
> > +	vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers);
> >
> >  	if (vq->shadow_aligned_idx >= PACKED_BATCH_SIZE) {
> >  		do_data_copy_enqueue(dev, vq);
> > @@ -1633,12 +1643,343 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> >  	return pkt_idx;
> >  }
> >
> > +static __rte_always_inline int
> > +vhost_enqueue_async_single_packed(struct virtio_net *dev,
> > +			    struct vhost_virtqueue *vq,
> > +			    struct rte_mbuf *pkt,
> > +			    struct buf_vector *buf_vec,
> > +			    uint16_t *nr_descs,
> > +			    uint16_t *nr_buffers,
> > +			    struct iovec *src_iovec, struct iovec *dst_iovec,
> > +			    struct rte_vhost_iov_iter *src_it,
> > +			    struct rte_vhost_iov_iter *dst_it)
> > +{
> > +	uint16_t nr_vec = 0;
> > +	uint16_t avail_idx = vq->last_avail_idx;
> > +	uint16_t max_tries, tries = 0;
> > +	uint16_t buf_id = 0;
> > +	uint32_t len = 0;
> > +	uint16_t desc_count;
> > +	uint32_t size = pkt->pkt_len + sizeof(struct virtio_net_hdr_mrg_rxbuf);
> > +	uint32_t buffer_len[vq->size];
> > +	uint16_t buffer_buf_id[vq->size];
> > +	uint16_t buffer_desc_count[vq->size];
> > +
> > +	*nr_buffers = 0;
> > +
> > +	if (rxvq_is_mergeable(dev))
> > +		max_tries = vq->size - 1;
> > +	else
> > +		max_tries = 1;
> > +
> > +	while (size > 0) {
> > +		/*
> > +		 * if we tried all available ring items, and still
> > +		 * can't get enough buf, it means something abnormal
> > +		 * happened.
> > +		 */
> > +		if (unlikely(++tries > max_tries))
> > +			return -1;
> > +
> > +		if (unlikely(fill_vec_buf_packed(dev, vq, avail_idx,
> > +						&desc_count, buf_vec, &nr_vec,
> > +						&buf_id, &len,
> > +						VHOST_ACCESS_RW) < 0))
> > +			return -1;
> > +
> > +		len = RTE_MIN(len, size);
> > +		size -= len;
> > +
> > +		buffer_len[*nr_buffers] = len;
> > +		buffer_buf_id[*nr_buffers] = buf_id;
> > +		buffer_desc_count[*nr_buffers] = desc_count;
> > +		*nr_buffers += 1;
> > +
> > +		*nr_descs += desc_count;
> > +		avail_idx += desc_count;
> > +		if (avail_idx >= vq->size)
> > +			avail_idx -= vq->size;
> > +	}
> > +
> > +	if (async_mbuf_to_desc(dev, vq, pkt, buf_vec, nr_vec, *nr_buffers,
> > +			src_iovec, dst_iovec, src_it, dst_it) < 0)
> > +		return -1;
> > +
> > +	vhost_shadow_enqueue_packed(vq, buffer_len, buffer_buf_id,
> > +					buffer_desc_count, *nr_buffers);
> > +
> > +	return 0;
> > +}
> > +
> > +static __rte_always_inline int16_t
> > +virtio_dev_rx_async_single_packed(struct virtio_net *dev,
> > +			    struct vhost_virtqueue *vq,
> > +			    struct rte_mbuf *pkt,
> > +			    uint16_t *nr_descs, uint16_t *nr_buffers,
> > +			    struct iovec *src_iovec, struct iovec *dst_iovec,
> > +			    struct rte_vhost_iov_iter *src_it,
> > +			    struct rte_vhost_iov_iter *dst_it)
> > +{
> > +	struct buf_vector buf_vec[BUF_VECTOR_MAX];
> > +
> > +	*nr_descs = 0;
> > +	*nr_buffers = 0;
> > +
> > +	if (unlikely(vhost_enqueue_async_single_packed(dev, vq, pkt, buf_vec,
> > +						 nr_descs, nr_buffers,
> > +						 src_iovec, dst_iovec,
> > +						 src_it, dst_it) < 0)) {
> > +		VHOST_LOG_DATA(DEBUG,
> > +				"(%d) failed to get enough desc from vring\n",
> > +				dev->vid);
> > +		return -1;
> > +	}
> > +
> > +	VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
> > +			dev->vid, vq->last_avail_idx,
> > +			vq->last_avail_idx + *nr_descs);
> > +
> > +	return 0;
> > +}
> > +
> > +static __rte_noinline uint32_t
> > +virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> > +	struct vhost_virtqueue *vq, uint16_t queue_id,
> > +	struct rte_mbuf **pkts, uint32_t count,
> > +	struct rte_mbuf **comp_pkts, uint32_t *comp_count)
> > +{
>
> Hi Cheng,
> There are some common parts in virtio_dev_rx_async_submit_packed and
> virtio_dev_rx_async_submit_split. We could abstract those common parts
> into shared helper functions to bring more clarity.

Sure, but the structures and variables used by the packed ring and the
split ring are different, so the common parts may not lend themselves
well to abstraction (see the sketch below). I will consider it again,
thank you.

> Also, this patch may be too large to review comfortably; please split
> it into a few parts for better understanding.

I'll make it better in the next version.
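
To illustrate the structural difference: the completion write-back paths
diverge completely between the two layouts even though the surrounding
control flow looks similar. A rough sketch using the names from this
patch (illustrative only, not the exact committed code):

	/* split ring: copy completed elements into the shared used ring,
	 * then publish them with a single release store to used->idx */
	rte_memcpy(&vq->used->ring[to], &vq->async_descs_split[from],
			n_descs * sizeof(struct vring_used_elem));
	__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
	vhost_vring_call_split(dev, vq);

	/* packed ring: there is no separate used ring; completion rewrites
	 * the descriptors in place and publishes them by flipping the
	 * AVAIL/USED flag bits according to the used wrap counter */
	vhost_update_used_packed(dev, vq, vq->async_buffers_packed + from,
			n_buffers);
	vhost_vring_call_packed(dev, vq);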
> Thanks,
> Marvin

> > +	uint32_t pkt_idx = 0, pkt_burst_idx = 0;
> > +	uint16_t num_buffers;
> > +	uint16_t num_desc;
> > +
> > +	struct rte_vhost_iov_iter *it_pool = vq->it_pool;
> > +	struct iovec *vec_pool = vq->vec_pool;
> > +	struct rte_vhost_async_desc tdes[MAX_PKT_BURST];
> > +	struct iovec *src_iovec = vec_pool;
> > +	struct iovec *dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> > +	struct rte_vhost_iov_iter *src_it = it_pool;
> > +	struct rte_vhost_iov_iter *dst_it = it_pool + 1;
> > +	uint16_t slot_idx = 0;
> > +	uint16_t segs_await = 0;
> > +	struct async_inflight_info *pkts_info = vq->async_pkts_info;
> > +	uint32_t n_pkts = 0, pkt_err = 0;
> > +	uint32_t num_async_pkts = 0, num_done_pkts = 0;
> > +	struct {
> > +		uint16_t pkt_idx;
> > +		uint16_t last_avail_idx;
> > +	} async_pkts_log[MAX_PKT_BURST];
> > +
> > +	rte_prefetch0(&vq->desc[vq->last_avail_idx & (vq->size - 1)]);
> > +
> > +	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
> > +		if (unlikely(virtio_dev_rx_async_single_packed(dev, vq,
> > +				pkts[pkt_idx], &num_desc, &num_buffers,
> > +				src_iovec, dst_iovec, src_it, dst_it) < 0)) {
> > +			break;
> > +		}
> > +
> > +		VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end index %d\n",
> > +			dev->vid, vq->last_avail_idx,
> > +			vq->last_avail_idx + num_desc);
> > +
> > +		slot_idx = (vq->async_pkts_idx + num_async_pkts) &
> > +			(vq->size - 1);
> > +		if (src_it->count) {
> > +			uint16_t from, to;
> > +
> > +			async_fill_desc(&tdes[pkt_burst_idx++], src_it, dst_it);
> > +			pkts_info[slot_idx].descs = num_desc;
> > +			pkts_info[slot_idx].nr_buffers = num_buffers;
> > +			pkts_info[slot_idx].mbuf = pkts[pkt_idx];
> > +			async_pkts_log[num_async_pkts].pkt_idx = pkt_idx;
> > +			async_pkts_log[num_async_pkts++].last_avail_idx =
> > +				vq->last_avail_idx;
> > +			src_iovec += src_it->nr_segs;
> > +			dst_iovec += dst_it->nr_segs;
> > +			src_it += 2;
> > +			dst_it += 2;
> > +			segs_await += src_it->nr_segs;
> > +
> > +			/**
> > +			 * recover shadow used ring and keep DMA-occupied
> > +			 * descriptors.
> > +			 */
> > +			from = vq->shadow_used_idx - num_buffers;
> > +			to = vq->async_packed_buffer_idx & (vq->size - 1);
> > +			if (num_buffers + to <= vq->size) {
> > +				rte_memcpy(&vq->async_buffers_packed[to],
> > +					&vq->shadow_used_packed[from],
> > +					num_buffers *
> > +					sizeof(struct vring_used_elem_packed));
> > +			} else {
> > +				int size = vq->size - to;
> > +
> > +				rte_memcpy(&vq->async_buffers_packed[to],
> > +					&vq->shadow_used_packed[from],
> > +					size *
> > +					sizeof(struct vring_used_elem_packed));
> > +				rte_memcpy(vq->async_buffers_packed,
> > +					&vq->shadow_used_packed[from + size],
> > +					(num_buffers - size) *
> > +					sizeof(struct vring_used_elem_packed));
> > +			}
> > +			vq->async_packed_buffer_idx += num_buffers;
> > +			vq->shadow_used_idx -= num_buffers;
> > +		} else
> > +			comp_pkts[num_done_pkts++] = pkts[pkt_idx];
> > +
> > +		vq_inc_last_avail_packed(vq, num_desc);
> > +
> > +		/*
> > +		 * conditions to trigger async device transfer:
> > +		 * - buffered packet number reaches transfer threshold
> > +		 * - unused async iov number is less than max vhost vector
> > +		 */
> > +		if (unlikely(pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> > +			((VHOST_MAX_ASYNC_VEC >> 1) - segs_await <
> > +			BUF_VECTOR_MAX))) {
> > +			n_pkts = vq->async_ops.transfer_data(dev->vid,
> > +					queue_id, tdes, 0, pkt_burst_idx);
> > +			src_iovec = vec_pool;
> > +			dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> > +			src_it = it_pool;
> > +			dst_it = it_pool + 1;
> > +			segs_await = 0;
> > +			vq->async_pkts_inflight_n += n_pkts;
> > +
> > +			if (unlikely(n_pkts < pkt_burst_idx)) {
> > +				/*
> > +				 * log error packets number here and do actual
> > +				 * error processing when applications poll
> > +				 * completion
> > +				 */
> > +				pkt_err = pkt_burst_idx - n_pkts;
> > +				pkt_burst_idx = 0;
> > +				break;
> > +			}
> > +
> > +			pkt_burst_idx = 0;
> > +		}
> > +	}
> > +
> > +	if (pkt_burst_idx) {
> > +		n_pkts = vq->async_ops.transfer_data(dev->vid,
> > +				queue_id, tdes, 0, pkt_burst_idx);
> > +		vq->async_pkts_inflight_n += n_pkts;
> > +
> > +		if (unlikely(n_pkts < pkt_burst_idx))
> > +			pkt_err = pkt_burst_idx - n_pkts;
> > +	}
> > +
> > +	do_data_copy_enqueue(dev, vq);
> > +
> > +	if (unlikely(pkt_err)) {
> > +		uint16_t num_buffers = 0;
> > +
> > +		num_async_pkts -= pkt_err;
> > +		/* calculate the sum of descriptors of DMA-error packets. */
> > +		while (pkt_err-- > 0) {
> > +			num_buffers +=
> > +				pkts_info[slot_idx & (vq->size - 1)].nr_buffers;
> > +			slot_idx--;
> > +		}
> > +		vq->async_packed_buffer_idx -= num_buffers;
> > +		/* recover shadow used ring and available ring */
> > +		vq->shadow_used_idx -= (vq->last_avail_idx -
> > +				async_pkts_log[num_async_pkts].last_avail_idx -
> > +				num_buffers);
>
> Could it be possible that vq->last_avail_idx is smaller than
> async_pkts_log[num_async_pkts].last_avail_idx when this operates near
> the ring's boundary?

Yes, you are right. It will be fixed, thanks.

Cheng
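
For the record, a worked example of the boundary case raised above: with
vq->size == 256, suppose async_pkts_log[num_async_pkts].last_avail_idx
was logged as 254 and the failed packets consumed 4 descriptors.
vq_inc_last_avail_packed() wraps the index back past 0, so
vq->last_avail_idx is now 2, and (vq->last_avail_idx - logged) underflows
instead of yielding 4. One possible wrap-aware recovery (a hypothetical
sketch, not the committed fix; 'nr_used' is an illustrative local):

	uint16_t logged = async_pkts_log[num_async_pkts].last_avail_idx;
	uint16_t nr_used;

	if (vq->last_avail_idx >= logged)
		nr_used = vq->last_avail_idx - logged;
	else	/* index wrapped past the end of the ring */
		nr_used = vq->last_avail_idx + vq->size - logged;

	vq->shadow_used_idx -= nr_used - num_buffers;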
>
> > +		vq->last_avail_idx =
> > +			async_pkts_log[num_async_pkts].last_avail_idx;
> > +		pkt_idx = async_pkts_log[num_async_pkts].pkt_idx;
> > +		num_done_pkts = pkt_idx - num_async_pkts;
> > +	}
> > +
> > +	vq->async_pkts_idx += num_async_pkts;
> > +	*comp_count = num_done_pkts;
> > +
> > +	if (likely(vq->shadow_used_idx)) {
> > +		vhost_flush_enqueue_shadow_packed(dev, vq);
> > +		vhost_vring_call_packed(dev, vq);
> > +	}
> > +
> > +	return pkt_idx;
> > +}
> > +
> > +static __rte_always_inline void
> > +vhost_update_used_packed(struct virtio_net *dev,
> > +			struct vhost_virtqueue *vq,
> > +			struct vring_used_elem_packed *shadow_ring,
> > +			uint16_t count)
> > +{
> > +	if (count == 0)
> > +		return;
> > +
> > +	int i;
> > +	uint16_t used_idx = vq->last_used_idx;
> > +	uint16_t head_idx = vq->last_used_idx;
> > +	uint16_t head_flags = 0;
> > +
> > +	/* Split loop in two to save memory barriers */
> > +	for (i = 0; i < count; i++) {
> > +		vq->desc_packed[used_idx].id = shadow_ring[i].id;
> > +		vq->desc_packed[used_idx].len = shadow_ring[i].len;
> > +
> > +		used_idx += shadow_ring[i].count;
> > +		if (used_idx >= vq->size)
> > +			used_idx -= vq->size;
> > +	}
> > +
> > +	/* The ordering for storing desc flags needs to be enforced. */
> > +	rte_atomic_thread_fence(__ATOMIC_RELEASE);
> > +
> > +	for (i = 0; i < count; i++) {
> > +		uint16_t flags;
> > +
> > +		if (vq->shadow_used_packed[i].len)
> > +			flags = VRING_DESC_F_WRITE;
> > +		else
> > +			flags = 0;
> > +
> > +		if (vq->used_wrap_counter) {
> > +			flags |= VRING_DESC_F_USED;
> > +			flags |= VRING_DESC_F_AVAIL;
> > +		} else {
> > +			flags &= ~VRING_DESC_F_USED;
> > +			flags &= ~VRING_DESC_F_AVAIL;
> > +		}
> > +
> > +		if (i > 0) {
> > +			vq->desc_packed[vq->last_used_idx].flags = flags;
> > +
> > +			vhost_log_cache_used_vring(dev, vq,
> > +					vq->last_used_idx *
> > +					sizeof(struct vring_packed_desc),
> > +					sizeof(struct vring_packed_desc));
> > +		} else {
> > +			head_idx = vq->last_used_idx;
> > +			head_flags = flags;
> > +		}
> > +
> > +		vq_inc_last_used_packed(vq, shadow_ring[i].count);
> > +	}
> > +
> > +	vq->desc_packed[head_idx].flags = head_flags;
> > +
> > +	vhost_log_cache_used_vring(dev, vq,
> > +				head_idx *
> > +				sizeof(struct vring_packed_desc),
> > +				sizeof(struct vring_packed_desc));
> > +
> > +	vhost_log_cache_sync(dev, vq);
> > +}
> > +
> >  uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >  		struct rte_mbuf **pkts, uint16_t count)
> >  {
> >  	struct virtio_net *dev = get_device(vid);
> >  	struct vhost_virtqueue *vq;
> > -	uint16_t n_pkts_cpl = 0, n_pkts_put = 0, n_descs = 0;
> > +	uint16_t n_pkts_cpl = 0, n_pkts_put = 0, n_descs = 0, n_buffers = 0;
> >  	uint16_t start_idx, pkts_idx, vq_size;
> >  	struct async_inflight_info *pkts_info;
> >  	uint16_t from, i;
> > @@ -1680,53 +2021,96 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> >  		goto done;
> >  	}
> >
> > -	for (i = 0; i < n_pkts_put; i++) {
> > -		from = (start_idx + i) & (vq_size - 1);
> > -		n_descs += pkts_info[from].descs;
> > -		pkts[i] = pkts_info[from].mbuf;
> > +	if (vq_is_packed(dev)) {
> > +		for (i = 0; i < n_pkts_put; i++) {
> > +			from = (start_idx + i) & (vq_size - 1);
> > +			n_buffers += pkts_info[from].nr_buffers;
> > +			pkts[i] = pkts_info[from].mbuf;
> > +		}
> > +	} else {
> > +		for (i = 0; i < n_pkts_put; i++) {
> > +			from = (start_idx + i) & (vq_size - 1);
> > +			n_descs += pkts_info[from].descs;
> > +			pkts[i] = pkts_info[from].mbuf;
> > +		}
> >  	}
> > +
> >  	vq->async_last_pkts_n = n_pkts_cpl - n_pkts_put;
> >  	vq->async_pkts_inflight_n -= n_pkts_put;
> >
> >  	if (likely(vq->enabled && vq->access_ok)) {
> > -		uint16_t nr_left = n_descs;
> >  		uint16_t nr_copy;
> >  		uint16_t to;
> >
> >  		/* write back completed descriptors to used ring */
> > -		do {
> > -			from = vq->last_async_desc_idx & (vq->size - 1);
> > -			nr_copy = nr_left + from <= vq->size ? nr_left :
> > -				vq->size - from;
> > -			to = vq->last_used_idx & (vq->size - 1);
> > -
> > -			if (to + nr_copy <= vq->size) {
> > -				rte_memcpy(&vq->used->ring[to],
> > +		if (vq_is_packed(dev)) {
> > +			uint16_t nr_left = n_buffers;
> > +			uint16_t to;
> > +
> > +			do {
> > +				from = vq->last_async_buffer_idx &
> > +					(vq->size - 1);
> > +				to = (from + nr_left) & (vq->size - 1);
> > +
> > +				if (to > from) {
> > +					vhost_update_used_packed(dev, vq,
> > +						vq->async_buffers_packed + from,
> > +						to - from);
> > +					vq->last_async_buffer_idx += nr_left;
> > +					nr_left = 0;
> > +				} else {
> > +					vhost_update_used_packed(dev, vq,
> > +						vq->async_buffers_packed + from,
> > +						vq->size - from);
> > +					vq->last_async_buffer_idx +=
> > +						vq->size - from;
> > +					nr_left -= vq->size - from;
> > +				}
> > +			} while (nr_left > 0);
> > +			vhost_vring_call_packed(dev, vq);
> > +		} else {
> > +			uint16_t nr_left = n_descs;
> > +
> > +			do {
> > +				from = vq->last_async_desc_idx & (vq->size - 1);
> > +				nr_copy = nr_left + from <= vq->size ? nr_left :
> > +					vq->size - from;
> > +				to = vq->last_used_idx & (vq->size - 1);
> > +
> > +				if (to + nr_copy <= vq->size) {
> > +					rte_memcpy(&vq->used->ring[to],
> > +						&vq->async_descs_split[from],
> > +						nr_copy *
> > +						sizeof(struct vring_used_elem));
> > +				} else {
> > +					uint16_t size = vq->size - to;
> > +
> > +					rte_memcpy(&vq->used->ring[to],
> > +						&vq->async_descs_split[from],
> > +						size *
> > +						sizeof(struct vring_used_elem));
> > +					rte_memcpy(vq->used->ring,
> > +						&vq->async_descs_split[from + size],
> > +						(nr_copy - size) *
> > +						sizeof(struct vring_used_elem));
> > +				}
> > +
> > +				vq->last_async_desc_idx += nr_copy;
> > +				vq->last_used_idx += nr_copy;
> > +				nr_left -= nr_copy;
> > +			} while (nr_left > 0);
> > +
> > +			__atomic_add_fetch(&vq->used->idx, n_descs,
> > +					__ATOMIC_RELEASE);
> > +			vhost_vring_call_split(dev, vq);
> >  		}
> > -
> > -		vq->last_async_desc_idx += nr_copy;
> > -		vq->last_used_idx += nr_copy;
> > -		nr_left -= nr_copy;
> > -	} while (nr_left > 0);
> > -
> > -	__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
> > -	vhost_vring_call_split(dev, vq);
> > -} else
> > -	vq->last_async_desc_idx += n_descs;
> > +	} else {
> > +		if (vq_is_packed(dev))
> > +			vq->last_async_buffer_idx += n_buffers;
> > +		else
> > +			vq->last_async_desc_idx += n_descs;
> > +	}
> >
> >  done:
> >  	rte_spinlock_unlock(&vq->access_lock);
> > @@ -1767,9 +2151,10 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
> >  	if (count == 0)
> >  		goto out;
> >
> > -	/* TODO: packed queue not implemented */
> >  	if (vq_is_packed(dev))
> > -		nb_tx = 0;
> > +		nb_tx = virtio_dev_rx_async_submit_packed(dev,
> > +				vq, queue_id, pkts, count, comp_pkts,
> > +				comp_count);
> >  	else
> >  		nb_tx = virtio_dev_rx_async_submit_split(dev,
> >  				vq, queue_id, pkts, count, comp_pkts,
> >  				comp_count);
> > --
> > 2.29.2