From: Marvin Liu
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com
Cc: zhihong.wang@intel.com, dev@dpdk.org, Marvin Liu
Date: Sun, 1 Jul 2018 07:50:46 +0800
Message-Id: <20180630235049.62610-7-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180630235049.62610-1-yong.liu@intel.com>
References: <20180630235049.62610-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v4 6/9] net/virtio: extract common part for in-order functions

The IN_ORDER virtio-user Tx function supports Tx checksum offloading
and TSO, which are also supported by the normal Tx function. Extract
the common part into a separate function so both paths can reuse it.

Signed-off-by: Marvin Liu
Reviewed-by: Maxime Coquelin

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 0bca29855..e9b1b496e 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -246,6 +246,55 @@ tx_offload_enabled(struct virtio_hw *hw)
 	(var) = (val); \
 } while (0)
 
+static inline void
+virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
+			struct rte_mbuf *cookie,
+			int offload)
+{
+	if (offload) {
+		if (cookie->ol_flags & PKT_TX_TCP_SEG)
+			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
+
+		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_UDP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct udp_hdr,
+				dgram_cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		case PKT_TX_TCP_CKSUM:
+			hdr->csum_start = cookie->l2_len + cookie->l3_len;
+			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
+			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+			break;
+
+		default:
+			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+			break;
+		}
+
+		/* TCP Segmentation Offload */
+		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
+			virtio_tso_fix_cksum(cookie);
+			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+				VIRTIO_NET_HDR_GSO_TCPV6 :
+				VIRTIO_NET_HDR_GSO_TCPV4;
+			hdr->gso_size = cookie->tso_segsz;
+			hdr->hdr_len =
+				cookie->l2_len +
+				cookie->l3_len +
+				cookie->l4_len;
+		} else {
+			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+		}
+	}
+}
+
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		       uint16_t needed, int use_indirect, int can_push)
@@ -315,49 +364,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	}
 
-	/* Checksum Offload / TSO */
-	if (offload) {
-		if (cookie->ol_flags & PKT_TX_TCP_SEG)
-			cookie->ol_flags |= PKT_TX_TCP_CKSUM;
-
-		switch (cookie->ol_flags & PKT_TX_L4_MASK) {
-		case PKT_TX_UDP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct udp_hdr,
-				dgram_cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		case PKT_TX_TCP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct tcp_hdr, cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		default:
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			break;
-		}
-
-		/* TCP Segmentation Offload */
-		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-			virtio_tso_fix_cksum(cookie);
-			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
-				VIRTIO_NET_HDR_GSO_TCPV6 :
-				VIRTIO_NET_HDR_GSO_TCPV4;
-			hdr->gso_size = cookie->tso_segsz;
-			hdr->hdr_len =
-				cookie->l2_len +
-				cookie->l3_len +
-				cookie->l4_len;
-		} else {
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
-	}
+	virtqueue_xmit_offload(hdr, cookie, offload);
 
 	do {
 		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
@@ -621,6 +628,15 @@ virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 	}
 }
 
+static inline void
+virtio_rx_stats_updated(struct virtnet_rx *rxvq, struct rte_mbuf *m)
+{
+	VIRTIO_DUMP_PACKET(m, m->data_len);
+
+	rxvq->stats.bytes += m->pkt_len;
+	virtio_update_packet_stats(&rxvq->stats, m);
+}
+
 /* Optionally fill offload information in structure */
 static int
 virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
@@ -773,12 +789,9 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			continue;
 		}
 
-		VIRTIO_DUMP_PACKET(rxm, rxm->data_len);
+		virtio_rx_stats_updated(rxvq, rxm);
 
 		rx_pkts[nb_rx++] = rxm;
-
-		rxvq->stats.bytes += rxm->pkt_len;
-		virtio_update_packet_stats(&rxvq->stats, rxm);
 	}
 
 	rxvq->stats.packets += nb_rx;
-- 
2.17.0
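
For readers following the series: the point of this extraction is that a
dedicated in-order Tx path can call the same helper instead of duplicating
the checksum/TSO logic. Below is a minimal sketch of such reuse. It is
illustrative only and not part of this patch: the function name
virtqueue_enqueue_xmit_inorder_sketch, its signature, and the elided
descriptor-ring handling are assumptions, not the code added later in the
series.

/*
 * Hypothetical sketch (not from this patch): a simplified in-order Tx
 * enqueue that reuses the extracted virtqueue_xmit_offload() helper.
 */
static inline void
virtqueue_enqueue_xmit_inorder_sketch(struct virtnet_tx *txvq,
		struct rte_mbuf *cookie, int offload)
{
	struct virtqueue *vq = txvq->vq;
	uint16_t head_size = vq->hw->vtnet_hdr_size;
	struct virtio_net_hdr *hdr;

	/* In-order Tx can push the virtio-net header into the mbuf
	 * headroom instead of using a separate header descriptor;
	 * callers are assumed to have verified the headroom, as the
	 * can_push checks do elsewhere in this driver. */
	hdr = (struct virtio_net_hdr *)
		rte_pktmbuf_prepend(cookie, head_size);
	if (unlikely(hdr == NULL))
		return;	/* no headroom; a real path would fall back */
	memset(hdr, 0, head_size);

	/* Same checksum/TSO handling as the normal Tx path. */
	virtqueue_xmit_offload(hdr, cookie, offload);

	/* ... fill descriptors sequentially and update the avail ring ... */
}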