From: Jiayu Hu <jiayu.hu@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, yinan.wang@intel.com, Jiayu Hu <jiayu.hu@intel.com>
Date: Fri, 9 Jul 2021 05:43:09 -0400
Message-Id: <1625823790-91995-3-git-send-email-jiayu.hu@intel.com>
In-Reply-To: <1625823790-91995-1-git-send-email-jiayu.hu@intel.com>
References: <1625656687-40604-2-git-send-email-jiayu.hu@intel.com> <1625823790-91995-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/3] vhost: rework async configuration struct

This patch reworks the async configuration structure to improve code
readability. In addition, reserved padding fields are added to the
structure for future use.

Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst | 19 +++++++++++--------
 examples/vhost/main.c               |  8 ++++----
 lib/vhost/rte_vhost_async.h         | 35 +++++++++++++++++------------------
 lib/vhost/vhost.c                   | 13 +++++--------
 4 files changed, 37 insertions(+), 38 deletions(-)
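For reviewers, a minimal before/after sketch of the caller-side change made
by this patch. It only restates what the examples/vhost hunk below does;
vid, VIRTIO_RXQ and channel_ops are taken from that hunk:

/* Old API: feature bits and threshold packed into a bit-field union. */
struct rte_vhost_async_features f;
f.async_inorder = 1;
f.async_threshold = 256;
rte_vhost_async_channel_register(vid, VIRTIO_RXQ, f.intval, &channel_ops);

/* New API: explicit fields in rte_vhost_async_config, passed by value. */
struct rte_vhost_async_config config = {0};
config.features = RTE_VHOST_ASYNC_INORDER;
config.async_threshold = 256;
rte_vhost_async_channel_register(vid, VIRTIO_RXQ, config, &channel_ops);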
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index d18fb98..affdc57 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -218,26 +218,29 @@ The following is an overview of some key Vhost API functions:
 
   Enable or disable zero copy feature of the vhost crypto backend.
 
-* ``rte_vhost_async_channel_register(vid, queue_id, features, ops)``
+* ``rte_vhost_async_channel_register(vid, queue_id, config, ops)``
 
   Register a vhost queue with async copy device channel after vring
-  is enabled. Following device ``features`` must be specified together
+  is enabled. Following device ``config`` must be specified together
   with the registration:
 
-  * ``async_inorder``
+  * ``features``
 
-    Async copy device can guarantee the ordering of copy completion
-    sequence. Copies are completed in the same order with that at
-    the submission time.
+    This field is used to specify async copy device features.
 
-    Currently, only ``async_inorder`` capable device is supported by vhost.
+    ``RTE_VHOST_ASYNC_INORDER`` represents the async copy device can
+    guarantee the order of copy completion is the same as the order
+    of copy submission.
+
+    Currently, only ``RTE_VHOST_ASYNC_INORDER`` capable device is
+    supported by vhost.
 
   * ``async_threshold``
 
     The copy length (in bytes) below which CPU copy will be used even
     if applications call async vhost APIs to enqueue/dequeue data.
 
-    Typical value is 512~1024 depending on the async device capability.
+    Typical value is 256~1024 depending on the async device capability.
 
   Applications must provide following ``ops`` callbacks for vhost lib to
   work with the async copy devices:
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2179ea..9cd855a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1468,7 +1468,7 @@ new_device(int vid)
 		vid, vdev->coreid);
 
 	if (async_vhost_driver) {
-		struct rte_vhost_async_features f;
+		struct rte_vhost_async_config config = {0};
 		struct rte_vhost_async_channel_ops channel_ops;
 
 		if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
@@ -1476,11 +1476,11 @@ new_device(int vid)
 			channel_ops.check_completed_copies =
 				ioat_check_completed_copies_cb;
 
-			f.async_inorder = 1;
-			f.async_threshold = 256;
+			config.features = RTE_VHOST_ASYNC_INORDER;
+			config.async_threshold = 256;
 
 			return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
-				f.intval, &channel_ops);
+				config, &channel_ops);
 		}
 	}
 
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 6faa31f..c93490d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -93,18 +93,20 @@ struct async_inflight_info {
 };
 
 /**
- * dma channel feature bit definition
+ * dma channel features
  */
-struct rte_vhost_async_features {
-	union {
-		uint32_t intval;
-		struct {
-			uint32_t async_inorder:1;
-			uint32_t resvd_0:15;
-			uint32_t async_threshold:12;
-			uint32_t resvd_1:4;
-		};
-	};
+enum {
+	RTE_VHOST_ASYNC_FEATURE_UNKNOWN = 0U,
+	RTE_VHOST_ASYNC_INORDER = 1U << 0,
+};
+
+/**
+ * dma channel configuration
+ */
+struct rte_vhost_async_config {
+	uint32_t async_threshold;
+	uint32_t features;
+	uint32_t resvd[2];
 };
 
 /**
@@ -114,12 +116,8 @@ struct rte_vhost_async_features {
  *  vhost device id async channel to be attached to
  * @param queue_id
  *  vhost queue id async channel to be attached to
- * @param features
- *  DMA channel feature bit
- *  b0 : DMA supports inorder data transfer
- *  b1  - b15: reserved
- *  b16 - b27: Packet length threshold for DMA transfer
- *  b28 - b31: reserved
+ * @param config
+ *  DMA channel configuration structure
  * @param ops
  *  DMA operation callbacks
 * @return
@@ -127,7 +125,8 @@ struct rte_vhost_async_features {
  */
 __rte_experimental
 int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
-	uint32_t features, struct rte_vhost_async_channel_ops *ops);
+	struct rte_vhost_async_config config,
+	struct rte_vhost_async_channel_ops *ops);
 
 /**
  * unregister a dma channel for vhost
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 53a470f..a20f05a 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1620,18 +1620,15 @@ int rte_vhost_extern_callback_register(int vid,
 }
 
 int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
-		uint32_t features,
-		struct rte_vhost_async_channel_ops *ops)
+		struct rte_vhost_async_config config,
+		struct rte_vhost_async_channel_ops *ops)
 {
 	struct vhost_virtqueue *vq;
 	struct virtio_net *dev = get_device(vid);
-	struct rte_vhost_async_features f;
 
 	if (dev == NULL || ops == NULL)
 		return -1;
 
-	f.intval = features;
-
 	if (queue_id >= VHOST_MAX_VRING)
 		return -1;
 
@@ -1640,7 +1637,7 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
 	if (unlikely(vq == NULL || !dev->async_copy))
 		return -1;
 
-	if (unlikely(!f.async_inorder)) {
+	if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
 		VHOST_LOG_CONFIG(ERR,
 			"async copy is not supported on non-inorder mode "
 			"(vid %d, qid: %d)\n", vid, queue_id);
@@ -1720,8 +1717,8 @@ int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
 	vq->async_ops.check_completed_copies = ops->check_completed_copies;
 	vq->async_ops.transfer_data = ops->transfer_data;
 
-	vq->async_inorder = f.async_inorder;
-	vq->async_threshold = f.async_threshold;
+	vq->async_inorder = true;
+	vq->async_threshold = config.async_threshold;
 
 	vq->async_registered = true;
 
-- 
2.7.4
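For reference, a self-contained sketch of how an application is expected to
register an async channel once this patch is applied. It mirrors the
examples/vhost code path above; ioat_transfer_data_cb,
ioat_check_completed_copies_cb and VIRTIO_RXQ come from examples/vhost
(ioat.h and main.h) and the 256-byte threshold is only an illustrative value,
not something mandated by this patch:

#include <stdio.h>
#include <rte_vhost_async.h>
#include "ioat.h"	/* ioat_* callbacks from examples/vhost (assumed available) */
#include "main.h"	/* VIRTIO_RXQ */

/*
 * Register an in-order async copy channel on the RX queue of 'vid'.
 * rte_vhost_async_channel_register() returns -1 if async copy was not
 * negotiated for the device or if RTE_VHOST_ASYNC_INORDER is missing
 * from config.features, as enforced in lib/vhost/vhost.c above.
 */
static int
register_async_channel(int vid)
{
	struct rte_vhost_async_channel_ops channel_ops;
	struct rte_vhost_async_config config = {0};

	/* Callbacks that drive the DMA engine. */
	channel_ops.transfer_data = ioat_transfer_data_cb;
	channel_ops.check_completed_copies = ioat_check_completed_copies_cb;

	/* Only in-order completion capable channels are accepted by vhost. */
	config.features = RTE_VHOST_ASYNC_INORDER;
	/* Packets shorter than this are copied by the CPU. */
	config.async_threshold = 256;

	if (rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
				config, &channel_ops) < 0) {
		printf("failed to register async channel for vid %d\n", vid);
		return -1;
	}

	return 0;
}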