From: "Ding, Xuan"
To: "Ye, Xiaolong"
Cc: "maxime.coquelin@redhat.com", "Bie, Tiwei", "Wang, Zhihong", "Liu, Yong",
 "dev@dpdk.org", "stable@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v3] net/virtio-user: fix packed ring server mode
Date: Wed, 18 Dec 2019 02:38:34 +0000
Message-ID: <3DA54CD954B3144AB0AD8E35A260C006052636D3@shsmsx102.ccr.corp.intel.com>
References: <20191209164939.54806-1-xuan.ding@intel.com>
 <20191218022406.86245-1-xuan.ding@intel.com>
 <20191218022501.GP59123@intel.com>
In-Reply-To: <20191218022501.GP59123@intel.com>

> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Wednesday, December 18, 2019 10:25 AM
> To: Ding, Xuan
> Cc: maxime.coquelin@redhat.com; Bie, Tiwei; Wang, Zhihong; Liu, Yong;
> dev@dpdk.org; stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3] net/virtio-user: fix packed ring server mode
>
> Hi, Xuan
>
> On 12/18, Xuan Ding wrote:
> >This patch fixes the case where the datapath does not work properly
> >when vhost reconnects to virtio in server mode with packed ring.
> >
> >Currently, virtio and vhost share the memory of the vring. For split ring,
> >vhost can read the status of descriptors directly from the available
> >ring and the used ring during reconnection. Therefore, the datapath can
> >continue.
> >
> >But for packed ring, when reconnecting to virtio, vhost cannot get the
> >status of descriptors through the descriptor ring alone. By resetting the
> >Tx and Rx queues, the datapath can restart from the beginning.
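
For background (an illustration added here, not part of the patch or the
original mail): in a packed ring, whether a descriptor is available is encoded
only in its AVAIL/USED flag bits, and those bits are meaningful only relative
to a wrap counter the driver keeps in private memory. After a vhost reconnect
the backend has lost that counter, so it cannot tell available descriptors
from stale ones; the split ring avoids this because its avail/used indexes
live in the shared rings. A minimal sketch of the availability check,
following the virtio 1.1 packed-ring convention (flag values as in DPDK's
virtio_ring.h; the standalone helper itself is illustrative only):

    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_PACKED_DESC_F_AVAIL   (1 << 7)
    #define VRING_PACKED_DESC_F_USED    (1 << 15)

    /*
     * A descriptor is available only when its AVAIL bit matches the
     * driver's wrap counter and its USED bit does not. Without the
     * wrap counter, the flags alone are ambiguous, which is why the
     * queues are reset on reconnection instead of being re-read.
     */
    static inline bool
    desc_is_avail(uint16_t flags, bool wrap_counter)
    {
        return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
               wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
    }
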
> >
> >Fixes: 4c3f5822eb214 ("net/virtio: add packed virtqueue defines")
> >Cc: stable@dpdk.org
> >
> >Signed-off-by: Xuan Ding
> >
> >v3:
> >* Removed an extra asterisk from a comment.
> >* Renamed device reset function and moved it to virtio_user_ethdev.c.
> >
> >v2:
> >* Renamed queue reset functions and moved them to virtqueue.c.
>
> Please put these change logs after the '---' marker below, then they won't be
> shown in the commit log when you apply the patch with `git am`.
>
> Thanks,
> Xiaolong

Hi, Xiaolong,

Thank you for your advice; I will change it in the next version.

Best regards,
Xuan

>
> >---
> > drivers/net/virtio/virtio_ethdev.c      |  4 +-
> > drivers/net/virtio/virtio_user_ethdev.c | 40 ++++++++++++++
> > drivers/net/virtio/virtqueue.c          | 71 +++++++++++++++++++++++++
> > drivers/net/virtio/virtqueue.h          |  4 ++
> > 4 files changed, 117 insertions(+), 2 deletions(-)
> >
> >diff --git a/drivers/net/virtio/virtio_ethdev.c
> >b/drivers/net/virtio/virtio_ethdev.c
> >index 044eb10a7..f9d0ea70d 100644
> >--- a/drivers/net/virtio/virtio_ethdev.c
> >+++ b/drivers/net/virtio/virtio_ethdev.c
> >@@ -1913,6 +1913,8 @@ eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
> > 		goto err_vtpci_init;
> > 	}
> >
> >+	rte_spinlock_init(&hw->state_lock);
> >+
> > 	/* reset device and negotiate default features */
> > 	ret = virtio_init_device(eth_dev, VIRTIO_PMD_DEFAULT_GUEST_FEATURES);
> > 	if (ret < 0)
> >@@ -2155,8 +2157,6 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> > 		return -EBUSY;
> > 	}
> >
> >-	rte_spinlock_init(&hw->state_lock);
> >-
> > 	hw->use_simple_rx = 1;
> >
> > 	if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
> >diff --git a/drivers/net/virtio/virtio_user_ethdev.c
> >b/drivers/net/virtio/virtio_user_ethdev.c
> >index 3fc172573..425f48230 100644
> >--- a/drivers/net/virtio/virtio_user_ethdev.c
> >+++ b/drivers/net/virtio/virtio_user_ethdev.c
> >@@ -25,12 +25,48 @@
> > #define virtio_user_get_dev(hw) \
> > 	((struct virtio_user_dev *)(hw)->virtio_user_dev)
> >
> >+static void
> >+virtio_user_reset_queues_packed(struct rte_eth_dev *dev)
> >+{
> >+	struct virtio_hw *hw = dev->data->dev_private;
> >+	struct virtnet_rx *rxvq;
> >+	struct virtnet_tx *txvq;
> >+	uint16_t i;
> >+
> >+	/* Add lock to avoid queue contention. */
> >+	rte_spinlock_lock(&hw->state_lock);
> >+	hw->started = 0;
> >+
> >+	/*
> >+	 * Wait for the datapath to complete before resetting queues.
> >+	 * 1 ms should be enough for the ongoing Tx/Rx functions to finish.
> >+	 */
> >+	rte_delay_ms(1);
> >+
> >+	/* Vring reset for each Tx queue and Rx queue. */
> >+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> >+		rxvq = dev->data->rx_queues[i];
> >+		virtqueue_rxvq_reset_packed(rxvq->vq);
> >+		virtio_dev_rx_queue_setup_finish(dev, i);
> >+	}
> >+
> >+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> >+		txvq = dev->data->tx_queues[i];
> >+		virtqueue_txvq_reset_packed(txvq->vq);
> >+	}
> >+
> >+	hw->started = 1;
> >+	rte_spinlock_unlock(&hw->state_lock);
> >+}
> >+
> >+
> > static int
> > virtio_user_server_reconnect(struct virtio_user_dev *dev)
> > {
> > 	int ret;
> > 	int connectfd;
> > 	struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->port_id];
> >+	struct virtio_hw *hw = eth_dev->data->dev_private;
> >
> > 	connectfd = accept(dev->listenfd, NULL, NULL);
> > 	if (connectfd < 0)
> >@@ -51,6 +87,10 @@ virtio_user_server_reconnect(struct virtio_user_dev *dev)
> >
> > 	dev->features &= dev->device_features;
> >
> >+	/* For packed ring, resetting queues is required in reconnection. */
> >+	if (vtpci_packed_queue(hw))
> >+		virtio_user_reset_queues_packed(eth_dev);
> >+
> > 	ret = virtio_user_start_device(dev);
> > 	if (ret < 0)
> > 		return -1;
> >diff --git a/drivers/net/virtio/virtqueue.c
> >b/drivers/net/virtio/virtqueue.c
> >index 5ff1e3587..0b4e3bf3e 100644
> >--- a/drivers/net/virtio/virtqueue.c
> >+++ b/drivers/net/virtio/virtqueue.c
> >@@ -141,3 +141,74 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
> > 	else
> > 		virtqueue_rxvq_flush_split(vq);
> > }
> >+
> >+int
> >+virtqueue_rxvq_reset_packed(struct virtqueue *vq)
> >+{
> >+	int size = vq->vq_nentries;
> >+	struct vq_desc_extra *dxp;
> >+	struct virtnet_rx *rxvq;
> >+	uint16_t desc_idx;
> >+
> >+	vq->vq_used_cons_idx = 0;
> >+	vq->vq_desc_head_idx = 0;
> >+	vq->vq_avail_idx = 0;
> >+	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
> >+	vq->vq_free_cnt = vq->vq_nentries;
> >+
> >+	vq->vq_packed.used_wrap_counter = 1;
> >+	vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
> >+	vq->vq_packed.event_flags_shadow = 0;
> >+	vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
> >+
> >+	rxvq = &vq->rxq;
> >+	memset(rxvq->mz->addr, 0, rxvq->mz->len);
> >+
> >+	for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
> >+		dxp = &vq->vq_descx[desc_idx];
> >+		if (dxp->cookie != NULL) {
> >+			rte_pktmbuf_free(dxp->cookie);
> >+			dxp->cookie = NULL;
> >+		}
> >+	}
> >+
> >+	vring_desc_init_packed(vq, size);
> >+
> >+	return 0;
> >+}
> >+
> >+int
> >+virtqueue_txvq_reset_packed(struct virtqueue *vq)
> >+{
> >+	int size = vq->vq_nentries;
> >+	struct vq_desc_extra *dxp;
> >+	struct virtnet_tx *txvq;
> >+	uint16_t desc_idx;
> >+
> >+	vq->vq_used_cons_idx = 0;
> >+	vq->vq_desc_head_idx = 0;
> >+	vq->vq_avail_idx = 0;
> >+	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
> >+	vq->vq_free_cnt = vq->vq_nentries;
> >+
> >+	vq->vq_packed.used_wrap_counter = 1;
> >+	vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
> >+	vq->vq_packed.event_flags_shadow = 0;
> >+
> >+	txvq = &vq->txq;
> >+	memset(txvq->mz->addr, 0, txvq->mz->len);
> >+	memset(txvq->virtio_net_hdr_mz->addr, 0,
> >+		txvq->virtio_net_hdr_mz->len);
> >+
> >+	for (desc_idx = 0; desc_idx < vq->vq_nentries; desc_idx++) {
> >+		dxp = &vq->vq_descx[desc_idx];
> >+		if (dxp->cookie != NULL) {
> >+			rte_pktmbuf_free(dxp->cookie);
> >+			dxp->cookie = NULL;
> >+		}
> >+	}
> >+
> >+	vring_desc_init_packed(vq, size);
> >+
> >+	return 0;
> >+}
> >diff --git a/drivers/net/virtio/virtqueue.h
> >b/drivers/net/virtio/virtqueue.h
> >index 8d7f197b1..58ad7309a 100644
> >--- a/drivers/net/virtio/virtqueue.h
> >+++ b/drivers/net/virtio/virtqueue.h
> >@@ -443,6 +443,10 @@ struct rte_mbuf *virtqueue_detach_unused(struct virtqueue *vq);
> > /* Flush the elements in the used ring. */
> > void virtqueue_rxvq_flush(struct virtqueue *vq);
> >
> >+int virtqueue_rxvq_reset_packed(struct virtqueue *vq);
> >+
> >+int virtqueue_txvq_reset_packed(struct virtqueue *vq);
> >+
> > static inline int
> > virtqueue_full(const struct virtqueue *vq)
> > {
> >--
> >2.17.1
> >
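
As a quick sanity sketch (not part of the patch; the field names are taken
from the diff above, while the helper name and the check itself are
hypothetical), virtqueue_rxvq_reset_packed() is expected to leave the queue
in the same state as a freshly initialised one, so that
virtio_dev_rx_queue_setup_finish() can refill the ring from slot 0 again.
That post-reset state could be asserted like this:

    #include <rte_debug.h>   /* RTE_ASSERT */
    #include "virtqueue.h"   /* struct virtqueue, VRING_* flags (driver-internal header) */

    /* Hypothetical post-reset check; relies only on fields shown in the diff. */
    static void
    check_rxvq_reset_state(const struct virtqueue *vq)
    {
        RTE_ASSERT(vq->vq_used_cons_idx == 0);
        RTE_ASSERT(vq->vq_avail_idx == 0);
        RTE_ASSERT(vq->vq_free_cnt == vq->vq_nentries);
        RTE_ASSERT(vq->vq_packed.used_wrap_counter == 1);
        RTE_ASSERT(vq->vq_packed.cached_flags ==
                   (VRING_PACKED_DESC_F_AVAIL | VRING_DESC_F_WRITE));
    }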