From: Stephen Hemminger
To: dev@dpdk.org
Date: Fri, 4 Mar 2016 10:19:20 -0800
Message-Id: <1457115561-31186-3-git-send-email-stephen@networkplumber.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1457115561-31186-1-git-send-email-stephen@networkplumber.org>
References: <1457115561-31186-1-git-send-email-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 2/3] virtio: use any layout on transmit

Virtio supports an "any layout" feature which allows the sender to
prepend the transmit header to the packet data in the same buffer,
rather than placing it in a separate descriptor. Using it requires that
the mbuf be writeable, that its data be suitably aligned, and that the
feature has been negotiated. When all of these conditions hold, it is
the optimal way to transmit a single-segment packet.
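Concretely, the eligibility test added to virtio_xmit_pkts() below boils
down to the following checks; this is a condensed sketch for review
purposes, and the helper name tx_can_push is illustrative rather than
part of the patch:

	static inline int
	tx_can_push(struct virtio_hw *hw, struct rte_mbuf *m, uint16_t hdr_size)
	{
		/* The header may be pushed into the mbuf headroom only if
		 * the device accepts any descriptor layout, the mbuf is not
		 * shared, the packet is a single segment, there is room for
		 * the header, and the data pointer is suitably aligned.
		 */
		return vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
		       rte_mbuf_refcnt_read(m) == 1 &&
		       m->nb_segs == 1 &&
		       rte_pktmbuf_headroom(m) >= hdr_size &&
		       rte_is_aligned(rte_pktmbuf_mtod(m, char *),
				      __alignof__(struct virtio_net_hdr_mrg_rxbuf));
	}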
Signed-off-by: Stephen Hemminger
---
 drivers/net/virtio/virtio_rxtx.c | 73 +++++++++++++++++++++++++---------------
 1 file changed, 46 insertions(+), 27 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 5fe3eec..0f12e64 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -210,13 +210,13 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
 
 static int
 virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
-		       int use_indirect)
+		       uint16_t needed, int use_indirect, int can_push)
 {
 	struct vq_desc_extra *dxp;
 	struct vring_desc *start_dp;
 	uint16_t seg_num = cookie->nb_segs;
-	uint16_t needed = use_indirect ? 1 : 1 + seg_num;
 	uint16_t head_idx, idx;
+	uint16_t head_size = txvq->hw->vtnet_hdr_size;
 	unsigned long offs;
 
 	if (unlikely(txvq->vq_free_cnt == 0))
@@ -234,7 +234,12 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 
 	start_dp = txvq->vq_ring.desc;
 
-	if (use_indirect) {
+	if (can_push) {
+		/* put on zero'd transmit header (no offloads) */
+		void *hdr = rte_pktmbuf_prepend(cookie, head_size);
+
+		memset(hdr, 0, head_size);
+	} else if (use_indirect) {
 		/* setup tx ring slot to point to indirect
 		 * descriptor list stored in reserved region.
 		 *
@@ -252,7 +257,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 
 		/* loop below will fill in rest of the indirect elements */
 		start_dp = txr[idx].tx_indir;
-		idx = 0;
+		idx = 1;
 	} else {
 		/* setup first tx ring slot to point to header
 		 * stored in reserved region.
@@ -263,22 +268,20 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 		start_dp[idx].addr = txvq->virtio_net_hdr_mem + offs;
 		start_dp[idx].len = txvq->hw->vtnet_hdr_size;
 		start_dp[idx].flags = VRING_DESC_F_NEXT;
+		idx = start_dp[idx].next;
 	}
 
-	for (; ((seg_num > 0) && (cookie != NULL)); seg_num--) {
-		idx = start_dp[idx].next;
+	do {
 		start_dp[idx].addr = rte_mbuf_data_dma_addr(cookie);
 		start_dp[idx].len = cookie->data_len;
-		start_dp[idx].flags = VRING_DESC_F_NEXT;
-		cookie = cookie->next;
-	}
+		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+		idx = start_dp[idx].next;
+	} while ((cookie = cookie->next) != NULL);
 
 	start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
 
 	if (use_indirect)
 		idx = txvq->vq_ring.desc[head_idx].next;
-	else
-		idx = start_dp[idx].next;
 
 	txvq->vq_desc_head_idx = idx;
 	if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
@@ -859,10 +862,13 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	return nb_rx;
 }
 
+
 uint16_t
 virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct virtqueue *txvq = tx_queue;
+	struct virtio_hw *hw = txvq->hw;
+	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_used, nb_tx;
 	int error;
 
@@ -878,14 +884,35 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
-		int use_indirect, slots, need;
+		int can_push = 0, use_indirect = 0, slots, need;
 
-		use_indirect = vtpci_with_feature(txvq->hw,
-						  VIRTIO_RING_F_INDIRECT_DESC)
-			&& (txm->nb_segs < VIRTIO_MAX_TX_INDIRECT);
+		/* Do VLAN tag insertion */
+		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+			error = rte_vlan_insert(&txm);
+			if (unlikely(error)) {
+				rte_pktmbuf_free(txm);
+				continue;
+			}
+		}
 
-		/* How many main ring entries are needed to this Tx? */
-		slots = use_indirect ? 1 : 1 + txm->nb_segs;
+		/* optimize ring usage */
+		if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
+		    rte_mbuf_refcnt_read(txm) == 1 &&
+		    txm->nb_segs == 1 &&
+		    rte_pktmbuf_headroom(txm) >= hdr_size &&
+		    rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+				   __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
+			can_push = 1;
+		else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
+			 txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
+			use_indirect = 1;
+
+		/* How many main ring entries are needed to this Tx?
+		 * any_layout => number of segments
+		 * indirect   => 1
+		 * default    => number of segments + 1
+		 */
+		slots = use_indirect ? 1 : (txm->nb_segs + !can_push);
 		need = slots - txvq->vq_free_cnt;
 
 		/* Positive value indicates it need free vring descriptors */
@@ -903,17 +930,9 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 		}
 
-		/* Do VLAN tag insertion */
-		if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-		}
-
 		/* Enqueue Packet buffers */
-		error = virtqueue_enqueue_xmit(txvq, txm, use_indirect);
+		error = virtqueue_enqueue_xmit(txvq, txm, slots,
+					       use_indirect, can_push);
 		if (unlikely(error)) {
 			if (error == ENOSPC)
 				PMD_TX_LOG(ERR, "virtqueue_enqueue Free count = 0");
-- 
2.1.4
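For reference, the main-ring slot accounting introduced above can be
read as the following small function; this is a sketch to make the
arithmetic explicit, and tx_ring_slots is a hypothetical name, not a
helper in the patch:

	/* Main-ring descriptors consumed by one packet, matching the
	 * "How many main ring entries" comment in virtio_xmit_pkts().
	 */
	static inline int
	tx_ring_slots(int use_indirect, int can_push, uint16_t nb_segs)
	{
		if (use_indirect)
			return 1;	/* one slot points at the indirect list */
		return nb_segs + !can_push;	/* extra slot for a separate header */
	}

A single-segment packet therefore costs one slot when the header can be
pushed and two otherwise, while a multi-segment packet sent through
indirect descriptors still costs only one main-ring slot.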