From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jiayu Hu
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com, Jiayu Hu
Date: Tue, 17 Mar 2020 05:21:21 -0400
Message-Id: <1584436885-18651-1-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH 0/4] Support DMA-accelerated Tx operations for vhost-user PMD
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

In the vhost-user PMD's Tx operations, data movement is heavily involved: performing large memory copies usually takes up a major share of CPU cycles and becomes the hot spot.
To offload these expensive memory operations from the CPU, this patch set leverages DMA engines, e.g. I/OAT, a DMA engine in Intel processors, to accelerate large copies for vhost-user. Large copies are offloaded from the CPU to the DMA engine asynchronously: the CPU submits copy jobs to the DMA engine but does not wait for their completion. Since there is no CPU intervention during the data transfer, we can save precious CPU cycles and improve the overall throughput of vhost-user PMD based applications, such as OVS. Because of the startup overhead associated with the DMA engine, during packet transmission only large copies are offloaded to the DMA engine, while small copies are still performed by the CPU.

The vhost-user PMD is designed to support various DMA engines, but it currently supports only I/OAT devices. In addition, I/OAT acceleration is enabled only for Tx operations of split rings. Users can explicitly assign an I/OAT device to a queue via the 'dmas' parameter. Note that one I/OAT device can only be used by one queue, and a queue can use only one I/OAT device at a time.

We measured the performance with testpmd. With 1024-byte packets, compared with the original SW data path, the DMA-enabled vhost-user PMD improves throughput by around 20%~30% in the VM2VM and PVP cases. With larger packets, the throughput improvement is even higher.
Jiayu Hu (4):
  vhost: populate guest memory for DMA-accelerated vhost-user
  net/vhost: setup vrings for DMA-accelerated datapath
  net/vhost: leverage DMA engines to accelerate Tx operations
  doc: add I/OAT acceleration support for vhost-user PMD

 doc/guides/nics/vhost.rst         |  14 +
 drivers/Makefile                  |   2 +-
 drivers/net/vhost/Makefile        |   6 +-
 drivers/net/vhost/internal.h      | 160 +++++++
 drivers/net/vhost/meson.build     |   5 +-
 drivers/net/vhost/rte_eth_vhost.c | 308 +++++++++++---
 drivers/net/vhost/virtio_net.c    | 861 ++++++++++++++++++++++++++++++++++++++
 drivers/net/vhost/virtio_net.h    | 288 +++++++++++++
 lib/librte_vhost/rte_vhost.h      |   1 +
 lib/librte_vhost/socket.c         |  20 +
 lib/librte_vhost/vhost.h          |   2 +
 lib/librte_vhost/vhost_user.c     |   3 +-
 12 files changed, 1597 insertions(+), 73 deletions(-)
 create mode 100644 drivers/net/vhost/internal.h
 create mode 100644 drivers/net/vhost/virtio_net.c
 create mode 100644 drivers/net/vhost/virtio_net.h

--
2.7.4