From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Mon, 3 Dec 2018 15:15:09 +0100
Message-Id: <20181203141515.28368-4-jfreimann@redhat.com>
In-Reply-To: <20181203141515.28368-1-jfreimann@redhat.com>
References: <20181203141515.28368-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v11 3/9] net/virtio: add packed virtqueue helpers

Add helper functions to set/clear and check descriptor flags.
Signed-off-by: Jens Freimann
---
 drivers/net/virtio/virtio_ethdev.c |  2 +
 drivers/net/virtio/virtqueue.h     | 73 +++++++++++++++++++++++++++++-
 2 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index fdcb7ecaa..48707b7b8 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -310,6 +310,7 @@ virtio_init_vring(struct virtqueue *vq)
 	if (vtpci_packed_queue(vq->hw)) {
 		vring_init_packed(&vq->ring_packed, ring_mem,
 				  VIRTIO_PCI_VRING_ALIGN, size);
 		vring_desc_init_packed(vq, size);
+		virtqueue_disable_intr_packed(vq);
 	} else {
 		vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
 		vring_desc_init_split(vr->desc, size);
@@ -383,6 +384,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
 	vq->hw = hw;
 	vq->vq_queue_index = vtpci_queue_idx;
 	vq->vq_nentries = vq_size;
+	vq->event_flags_shadow = 0;
 	if (vtpci_packed_queue(hw)) {
 		vq->avail_wrap_counter = 1;
 		vq->used_wrap_counter = 1;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 1401a9844..20b42f5fb 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -170,6 +170,7 @@ struct virtqueue {
 	struct vring_packed ring_packed;  /**< vring keeping desc, used and avail */
 	bool avail_wrap_counter;
 	bool used_wrap_counter;
+	uint16_t event_flags_shadow;

 	/**
 	 * Last consumed descriptor in the used table,
@@ -250,6 +251,32 @@ struct virtio_tx_region {
 			__attribute__((__aligned__(16)));
 };

+static inline void
+_set_desc_avail(struct vring_packed_desc *desc, int wrap_counter)
+{
+	desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
+		       VRING_DESC_F_USED(!wrap_counter);
+}
+
+static inline void
+set_desc_avail(struct virtqueue *vq, struct vring_packed_desc *desc)
+{
+	_set_desc_avail(desc, vq->avail_wrap_counter);
+}
+
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+	uint16_t used, avail, flags;
+
+	flags = desc->flags;
+	used = !!(flags & VRING_DESC_F_USED(1));
+	avail = !!(flags & VRING_DESC_F_AVAIL(1));
+
+	return avail == used && used == vq->used_wrap_counter;
+}
+
+
 static inline void
 vring_desc_init_packed(struct virtqueue *vq, int n)
 {
@@ -273,22 +300,64 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
 		dp[i].next = VQ_RING_DESC_CHAIN_END;
 }

+/**
+ * Tell the backend not to interrupt us.
+ */
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+	uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+	if (*event_flags != RING_EVENT_FLAGS_DISABLE) {
+		*event_flags = RING_EVENT_FLAGS_DISABLE;
+	}
+}
+
+
 /**
  * Tell the backend not to interrupt us.
  */
 static inline void
 virtqueue_disable_intr(struct virtqueue *vq)
 {
-	vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+	if (vtpci_packed_queue(vq->hw))
+		virtqueue_disable_intr_packed(vq);
+	else
+		vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+}
+
+/**
+ * Tell the backend to interrupt us.
+ */
+static inline void
+virtqueue_enable_intr_packed(struct virtqueue *vq)
+{
+	uint16_t *off_wrap = &vq->ring_packed.driver_event->desc_event_off_wrap;
+	uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+	*off_wrap = vq->vq_used_cons_idx |
+		((uint16_t)(vq->used_wrap_counter << 15));
+
+	if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
+		virtio_wmb();
+		vq->event_flags_shadow =
+			vtpci_with_feature(vq->hw, VIRTIO_RING_F_EVENT_IDX) ?
+				RING_EVENT_FLAGS_DESC : RING_EVENT_FLAGS_ENABLE;
+		*event_flags = vq->event_flags_shadow;
+	}
+}
+
+
 /**
  * Tell the backend to interrupt us.
  */
 static inline void
 virtqueue_enable_intr(struct virtqueue *vq)
 {
-	vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
+	if (vtpci_packed_queue(vq->hw))
+		virtqueue_enable_intr_packed(vq);
+	else
+		vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
 }

 /**
--
2.17.2