From: "Wang, Yinan"
To: "Liu, Yong", "maxime.coquelin@redhat.com", "Ye, Xiaolong", "Wang, Zhihong", "eperezma@redhat.com"
CC: "dev@dpdk.org", "Liu, Yong"
Subject: Re: [dpdk-dev] [PATCH] vhost: remove deferred shadow update
Date: Mon, 6 Apr 2020 08:56:57 +0000
Message-ID: <7ed0ab15a1be40b4955ea380e94aae5f@intel.com>
In-Reply-To: <20200401212926.74989-1-yong.liu@intel.com>
References: <20200401212926.74989-1-yong.liu@intel.com>

Tested-by: Wang, Yinan

> -----Original Message-----
> From: dev On Behalf Of Marvin Liu
> Sent: April 2, 2020 5:29
> To: maxime.coquelin@redhat.com; Ye, Xiaolong; Wang, Zhihong; eperezma@redhat.com
> Cc: dev@dpdk.org; Liu, Yong
> Subject: [dpdk-dev] [PATCH] vhost: remove deferred shadow update
>
> Deferring the shadow ring update helps overall throughput when the
> frontend is much slower than the backend, but that is not the only case
> we face now. In a setup like OVS-DPDK + DPDK virtio-user, the frontend
> is much faster than the backend and may not be able to collect
> available descriptors while the shadow update is deferred, which harms
> RFC2544 performance.
>
> The solution is to simply remove the deferred shadow update, which
> helps RFC2544 performance and fixes a potential issue with the
> virtio-net driver.
>
> Signed-off-by: Marvin Liu
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 37c47c7dc..2ba0575a7 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -382,25 +382,6 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
>  	}
>  }
>
> -static __rte_always_inline void
> -vhost_flush_dequeue_packed(struct virtio_net *dev,
> -			struct vhost_virtqueue *vq)
> -{
> -	int shadow_count;
> -	if (!vq->shadow_used_idx)
> -		return;
> -
> -	shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
> -	if (shadow_count <= 0)
> -		shadow_count += vq->size;
> -
> -	if ((uint32_t)shadow_count >= (vq->size - MAX_PKT_BURST)) {
> -		do_data_copy_dequeue(vq);
> -		vhost_flush_dequeue_shadow_packed(dev, vq);
> -		vhost_vring_call_packed(dev, vq);
> -	}
> -}
> -
>  /* avoid write operation when necessary, to lessen cache issues */
>  #define ASSIGN_UNLESS_EQUAL(var, val) do {	\
>  	if ((var) != (val))			\
> @@ -2133,20 +2114,6 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
>  	return pkt_idx;
>  }
>
> -static __rte_always_inline bool
> -next_desc_is_avail(const struct vhost_virtqueue *vq)
> -{
> -	bool wrap_counter = vq->avail_wrap_counter;
> -	uint16_t next_used_idx = vq->last_used_idx + 1;
> -
> -	if (next_used_idx >= vq->size) {
> -		next_used_idx -= vq->size;
> -		wrap_counter ^= 1;
> -	}
> -
> -	return desc_is_avail(&vq->desc_packed[next_used_idx],
> -			wrap_counter);
> -}
> -
>  static __rte_noinline uint16_t
>  virtio_dev_tx_packed(struct virtio_net *dev,
>  		struct vhost_virtqueue *vq,
> @@ -2163,7 +2130,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  		if (remained >= PACKED_BATCH_SIZE) {
>  			if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
>  							&pkts[pkt_idx])) {
> -				vhost_flush_dequeue_packed(dev, vq);
>  				pkt_idx += PACKED_BATCH_SIZE;
>  				remained -= PACKED_BATCH_SIZE;
>  				continue;
> @@ -2173,7 +2139,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  		if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
>  						&pkts[pkt_idx]))
>  			break;
> -		vhost_flush_dequeue_packed(dev, vq);
>  		pkt_idx++;
>  		remained--;
>
> @@ -2182,15 +2147,8 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>  	if (vq->shadow_used_idx) {
>  		do_data_copy_dequeue(vq);
>
> -		if (remained && !next_desc_is_avail(vq)) {
> -			/*
> -			 * The guest may be waiting to TX some buffers to
> -			 * enqueue more to avoid bufferfloat, so we try to
> -			 * reduce latency here.
> -			 */
> -			vhost_flush_dequeue_shadow_packed(dev, vq);
> -			vhost_vring_call_packed(dev, vq);
> -		}
> +		vhost_flush_dequeue_shadow_packed(dev, vq);
> +		vhost_vring_call_packed(dev, vq);
>  	}
>
>  	return pkt_idx;
> --
> 2.17.1
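
For readers unfamiliar with the shadow used ring, below is a minimal standalone sketch (not DPDK code; RING_SIZE, MAX_PKT_BURST and struct toy_vq are illustrative stand-ins for vq->size, the vhost burst size and the relevant vhost_virtqueue fields) contrasting the deferred flush condition removed by this patch with the unconditional flush that replaces it:

/*
 * Minimal standalone sketch (not DPDK code) of the two flush policies
 * discussed in the commit message above. RING_SIZE, MAX_PKT_BURST and
 * struct toy_vq are illustrative assumptions, not vhost definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE     256  /* stand-in for vq->size */
#define MAX_PKT_BURST  32  /* stand-in for the vhost burst size */

struct toy_vq {
	uint16_t last_used_idx;        /* descriptors consumed by the backend */
	uint16_t shadow_last_used_idx; /* last index published to the frontend */
	uint16_t shadow_used_idx;      /* entries pending in the shadow ring */
};

/*
 * Deferred policy (what the patch removes): the used ring is only
 * updated once nearly the whole ring is pending, so a fast frontend
 * can run out of descriptors to refill in the meantime.
 */
static bool deferred_should_flush(const struct toy_vq *vq)
{
	int shadow_count;

	if (!vq->shadow_used_idx)
		return false;

	shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
	if (shadow_count <= 0)
		shadow_count += RING_SIZE;

	return (uint32_t)shadow_count >= (RING_SIZE - MAX_PKT_BURST);
}

/* Policy after the patch: publish whenever anything is pending. */
static bool immediate_should_flush(const struct toy_vq *vq)
{
	return vq->shadow_used_idx != 0;
}

int main(void)
{
	/* Backend has dequeued one burst; frontend is waiting to refill. */
	struct toy_vq vq = {
		.last_used_idx = MAX_PKT_BURST,
		.shadow_last_used_idx = 0,
		.shadow_used_idx = MAX_PKT_BURST,
	};

	printf("deferred policy flushes now:  %s\n",
	       deferred_should_flush(&vq) ? "yes" : "no");  /* prints "no"  */
	printf("immediate policy flushes now: %s\n",
	       immediate_should_flush(&vq) ? "yes" : "no"); /* prints "yes" */
	return 0;
}

With these assumed values, the deferred policy would only publish used descriptors after 224 of the 256 ring entries have accumulated, while a fast frontend may already be stalled waiting for any of them to be returned, which is the RFC2544 regression the patch addresses.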