From mboxrd@z Thu Jan  1 00:00:00 1970
From: patrick.fu@intel.com
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: Patrick Fu <patrick.fu@intel.com>
Date: Wed, 22 Jul 2020 23:01:52 +0800
Message-Id: <20200722150153.3422450-2-patrick.fu@intel.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <20200722150153.3422450-1-patrick.fu@intel.com>
References: <20200722105741.3421255-1-patrick.fu@intel.com>
 <20200722150153.3422450-1-patrick.fu@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/2] doc: update guides for vhost async APIs

From: Patrick Fu <patrick.fu@intel.com>

Update vhost guides to document vhost async APIs

Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst | 86 ++++++++++++++++++++++++++---
 1 file changed, 77 insertions(+), 9 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index db921f922..b892eec67 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -147,6 +147,21 @@ The following is an overview of some key Vhost API functions:
 
     It is disabled by default.
 
+  - ``RTE_VHOST_USER_ASYNC_COPY``
+
+    The asynchronous data path is enabled when this flag is set. The async
+    data path allows applications to register async copy devices (typically
+    hardware DMA channels) to the vhost queues. Vhost leverages the
+    registered copy devices to offload memory copy operations from the CPU.
+    A set of async data path APIs is defined for DPDK applications to make
+    use of the async capability. Only packets enqueued/dequeued by the
+    async APIs are processed through the async data path.
+
+    Currently this feature is only implemented on the split ring enqueue
+    data path.
+
+    It is disabled by default.
+
 * ``rte_vhost_driver_set_features(path, features)``
 
   This function sets the feature bits the vhost-user driver supports. The
@@ -235,6 +250,59 @@ The following is an overview of some key Vhost API functions:
 
   Enable or disable zero copy feature of the vhost crypto backend.
 
+* ``rte_vhost_async_channel_register(vid, queue_id, features, ops)``
+
+  Register a vhost queue with an async copy device channel. The following
+  device ``features`` must be specified together with the registration:
+
+  * ``async_inorder``
+
+    The async copy device guarantees the ordering of copy completion:
+    copies are completed in the same order as they were submitted.
+
+    Currently, only ``async_inorder`` capable devices are supported by vhost.
+
+  * ``async_threshold``
+
+    The copy length (in bytes) below which a CPU copy is used, even if
+    applications call async vhost APIs to enqueue/dequeue data.
+
+    A typical value is 512 to 1024, depending on the async device capability.
+
+  Applications must provide the following ``ops`` callbacks for the vhost
+  lib to work with the async copy devices:
+
+  * ``transfer_data(vid, queue_id, descs, opaque_data, count)``
+
+    Vhost invokes this function to submit copy data to the async devices.
+    For devices that are not ``async_inorder`` capable, ``opaque_data``
+    could be used for identifying the completed packets.
+
+  * ``check_completed_copies(vid, queue_id, opaque_data, max_packets)``
+
+    Vhost invokes this function to get the copy data completed by the
+    async devices.
+
+* ``rte_vhost_async_channel_unregister(vid, queue_id)``
+
+  Unregister the async copy device channel from a vhost queue.
+
+* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count)``
+
+  Submit an enqueue request to transmit ``count`` packets from host to
+  guest via the async data path. The enqueue is not guaranteed to be
+  finished upon the return of this API call.
+
+  Applications must not free the packets submitted for enqueue until the
+  packets are completed.
+
+* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count)``
+
+  Poll the enqueue completion status of the async data path. Completed
+  packets are returned to applications through ``pkts``.
+
 Vhost-user Implementations
 --------------------------
@@ -294,16 +362,16 @@ Guest memory requirement
 
 * Memory pre-allocation
 
-  For non-zerocopy, guest memory pre-allocation is not a must. This can help
-  save of memory. If users really want the guest memory to be pre-allocated
-  (e.g., for performance reason), we can add option ``-mem-prealloc`` when
-  starting QEMU. Or, we can lock all memory at vhost side which will force
-  memory to be allocated when mmap at vhost side; option --mlockall in
-  ovs-dpdk is an example in hand.
+  For the non-zerocopy, non-async data path, guest memory pre-allocation
+  is not a must. This can help save memory. If users really want the guest
+  memory to be pre-allocated (e.g., for performance reasons), we can add
+  the option ``-mem-prealloc`` when starting QEMU. Or, we can lock all
+  memory at the vhost side, which forces memory to be allocated when it is
+  mmapped at the vhost side; the ``--mlockall`` option in ovs-dpdk is one example.
 
-  For zerocopy, we force the VM memory to be pre-allocated at vhost lib when
-  mapping the guest memory; and also we need to lock the memory to prevent
-  pages being swapped out to disk.
+  For the async and zerocopy data paths, we force the VM memory to be
+  pre-allocated at vhost lib when mapping the guest memory; we also need
+  to lock the memory to prevent pages from being swapped out to disk.
 
 * Memory sharing
-- 
2.18.4
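For readers wiring this up, below is a minimal sketch of the registration
step the patch documents. It is illustrative only, not part of the patch:
the struct and callback type names are assumed from the rte_vhost_async.h
header introduced elsewhere in this series, the feature bit layout is only
indicative, and my_dma_submit()/my_dma_poll() are hypothetical placeholders
for a real DMA driver.

/* Sketch: registering an async copy channel on a vhost queue. */
#include <stdint.h>
#include <rte_vhost.h>
#include <rte_vhost_async.h>

/* Hypothetical DMA driver entry points (placeholders, not a real API). */
extern int my_dma_submit(struct rte_vhost_async_desc *descs, uint16_t count);
extern int my_dma_poll(uint16_t max_packets);

/* Called by vhost to submit 'count' copy jobs described by 'descs'.
 * 'opaque_data' is only needed by devices that are not async_inorder
 * capable, to identify completed packets later. */
static int
my_transfer_data(int vid, uint16_t queue_id,
		struct rte_vhost_async_desc *descs,
		struct rte_vhost_async_status *opaque_data, uint16_t count)
{
	return my_dma_submit(descs, count);
}

/* Called by vhost to learn how many previously submitted copies have
 * finished, up to 'max_packets'. */
static int
my_check_completed_copies(int vid, uint16_t queue_id,
		struct rte_vhost_async_status *opaque_data,
		uint16_t max_packets)
{
	return my_dma_poll(max_packets);
}

static struct rte_vhost_async_channel_ops my_ops = {
	.transfer_data = my_transfer_data,
	.check_completed_copies = my_check_completed_copies,
};

/* Typically invoked once the queue becomes ready, e.g. from the
 * vring_state_changed() callback. */
static int
setup_async_channel(int vid, uint16_t queue_id)
{
	struct rte_vhost_async_features f = { .intval = 0 };

	f.async_inorder = 1;     /* device completes copies in order */
	f.async_threshold = 256; /* bytes; shorter copies use CPU copy */

	return rte_vhost_async_channel_register(vid, queue_id, f.intval,
						&my_ops);
}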
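A companion sketch of the enqueue side, showing the ownership rule the doc
text states: mbufs handed to rte_vhost_submit_enqueue_burst() stay owned by
the application until rte_vhost_poll_enqueue_completed() hands them back.
Again illustrative only; MAX_PKT_BURST and the drop policy for rejected
packets are arbitrary example choices.

/* Sketch: submitting a burst and harvesting completions. */
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_vhost.h>
#include <rte_vhost_async.h>

#define MAX_PKT_BURST 32 /* example burst size */

static void
async_tx_to_guest(int vid, uint16_t queue_id,
		struct rte_mbuf **pkts, uint16_t count)
{
	struct rte_mbuf *done[MAX_PKT_BURST];
	uint16_t n_enq, n_done;

	/* Kick off the copies; this may return before the data actually
	 * lands in guest memory, so these mbufs must not be freed yet. */
	n_enq = rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count);

	/* Packets the vring could not accept are still ours; drop (or
	 * retry) them. */
	if (n_enq < count)
		rte_pktmbuf_free_bulk(&pkts[n_enq], count - n_enq);

	/* Harvest finished copies; only these mbufs are safe to free. */
	n_done = rte_vhost_poll_enqueue_completed(vid, queue_id, done,
			MAX_PKT_BURST);
	rte_pktmbuf_free_bulk(done, n_done);
}

Since completion is decoupled from submission, an application would
typically call the poll function once per datapath iteration, draining
completions of earlier bursts while new bursts are submitted.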