From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 15 Oct 2020 17:35:19 +0200
To: "Liu, Yong", "Xia, Chenbo", "Wang, Zhihong"
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost add vectorized data path
Message-ID: <2c9a8c71-eaa3-f405-c315-300c6234e56e@redhat.com>
In-Reply-To: <6c634a226f7e48cbbca3d9091f68eea0@intel.com>
References: <20200819032414.51430-2-yong.liu@intel.com> <20201009081410.63944-1-yong.liu@intel.com> <7d902e14-1e4f-31d9-c7f6-7a57e00186ab@redhat.com> <6c634a226f7e48cbbca3d9091f68eea0@intel.com>

Hi Marvin,

On 10/15/20 5:28 PM, Liu, Yong wrote:
> Hi All,
> The performance gain from the vectorized datapath in OVS-DPDK is around 1%,
> while it also has a small impact on the original datapath.
> On the other hand, it increases the complexity of vhost (a new parameter is
> introduced, and memory information has to be prepared for address translation).
> After weighing the pros and cons, I'd like to withdraw this patch set.
> Thanks for your time.

Thanks for running the test with the new version.
I have removed it from Patchwork.
Thanks,
Maxime

> Regards,
> Marvin
>
>> -----Original Message-----
>> From: Maxime Coquelin
>> Sent: Monday, October 12, 2020 4:22 PM
>> To: Liu, Yong; Xia, Chenbo; Wang, Zhihong
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v3 0/5] vhost add vectorized data path
>>
>> Hi Marvin,
>>
>> On 10/9/20 10:14 AM, Marvin Liu wrote:
>>> The packed ring format was introduced in virtio spec 1.1. All descriptors
>>> are compacted into one single ring when the packed ring format is on, so
>>> it is straightforward to accelerate ring operations with SIMD instructions.
>>>
>>> This patch set introduces a vectorized data path in the vhost library. If
>>> the vectorized option is on, operations such as descriptor checks,
>>> descriptor writeback and address translation are accelerated by SIMD
>>> instructions. On a Skylake server, it brings a 6% performance gain in the
>>> loopback case and around a 4% performance gain in the PvP case.
>>
>> IMHO, a 4% gain on PVP is not a significant gain if we compare it to the
>> added complexity. Moreover, I guess this is a 4% gain with testpmd-based
>> PVP? If that is the case, it may be even lower with the OVS-DPDK PVP
>> benchmark; I will try to do a benchmark this week.
>>
>> Thanks,
>> Maxime
>>
>>> A vhost application can choose whether to use vectorized acceleration,
>>> just like the external buffer feature. If the platform or ring format
>>> does not support the vectorized function, vhost falls back to the default
>>> batch function. There is no impact on the current data path.
>>>
>>> v3:
>>> * rename vectorized datapath file
>>> * eliminate the impact when avx512 is disabled
>>> * dynamically allocate the memory regions structure
>>> * remove unlikely hint for in_order
>>>
>>> v2:
>>> * add vIOMMU support
>>> * add dequeue offloading
>>> * rebase code
>>>
>>> Marvin Liu (5):
>>>   vhost: add vectorized data path
>>>   vhost: reuse packed ring functions
>>>   vhost: prepare memory regions addresses
>>>   vhost: add packed ring vectorized dequeue
>>>   vhost: add packed ring vectorized enqueue
>>>
>>>  doc/guides/nics/vhost.rst           |   5 +
>>>  doc/guides/prog_guide/vhost_lib.rst |  12 +
>>>  drivers/net/vhost/rte_eth_vhost.c   |  17 +-
>>>  lib/librte_vhost/meson.build        |  16 ++
>>>  lib/librte_vhost/rte_vhost.h        |   1 +
>>>  lib/librte_vhost/socket.c           |   5 +
>>>  lib/librte_vhost/vhost.c            |  11 +
>>>  lib/librte_vhost/vhost.h            | 239 +++++++++++++++++++
>>>  lib/librte_vhost/vhost_user.c       |  26 +++
>>>  lib/librte_vhost/virtio_net.c       | 258 ++++-----------------
>>>  lib/librte_vhost/virtio_net_avx.c   | 344 ++++++++++++++++++++++++++++
>>>  11 files changed, 718 insertions(+), 216 deletions(-)
>>>  create mode 100644 lib/librte_vhost/virtio_net_avx.c
>>>
>
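For context on what a "vectorized descriptor check" can look like in practice,
below is a minimal, hypothetical C sketch. It is not code from this series:
the struct layout follows the virtio 1.1 packed ring, while the helper name,
macros and batch size are made up for illustration. It assumes a build with
-mavx512f -mavx512bw.

/*
 * Hypothetical illustration only -- not code from this patch set.
 * Batch-check the availability of four packed-ring descriptors with a
 * single 64-byte AVX-512 load.
 */
#include <stdbool.h>
#include <stdint.h>
#include <immintrin.h>

#define VRING_DESC_F_AVAIL  (1u << 7)   /* avail bit in desc flags */
#define VRING_DESC_F_USED   (1u << 15)  /* used bit in desc flags  */

struct vring_packed_desc {
        uint64_t addr;   /* guest buffer address */
        uint32_t len;    /* buffer length */
        uint16_t id;     /* buffer id */
        uint16_t flags;  /* avail/used flags */
};

/*
 * The flags word sits in bytes 14-15 of each 16-byte descriptor, i.e. in
 * 16-bit lanes 7, 15, 23 and 31 of a 64-byte (four-descriptor) load.
 */
#define DESC_FLAGS_LANES \
        ((1u << 7) | (1u << 15) | (1u << 23) | (1u << 31))

/*
 * Four consecutive descriptors are all available when each one's AVAIL bit
 * equals the ring wrap counter and its USED bit does not.
 */
static inline bool
desc_batch_is_avail(const struct vring_packed_desc *desc, bool wrap)
{
        const uint16_t expect = wrap ? VRING_DESC_F_AVAIL : VRING_DESC_F_USED;
        const __m512i flag_bits = _mm512_set1_epi16(
                (short)(VRING_DESC_F_AVAIL | VRING_DESC_F_USED));

        /* Load four 16-byte descriptors (64 bytes) at once. */
        __m512i v = _mm512_loadu_si512(desc);
        /* Keep only the AVAIL/USED bits of every 16-bit lane. */
        __m512i masked = _mm512_and_si512(v, flag_bits);
        /* Compare just the four flags lanes against the expected pattern. */
        __mmask32 eq = _mm512_mask_cmpeq_epu16_mask(DESC_FLAGS_LANES, masked,
                        _mm512_set1_epi16((short)expect));

        return eq == DESC_FLAGS_LANES;
}

Four 16-byte descriptors fit exactly into one 64-byte ZMM register (and one
cache line), which is what makes this style of batching attractive. On a
platform without AVX-512, the same check would simply loop over the four
flags fields one descriptor at a time, matching the fallback behaviour
described in the cover letter.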