From: Ruifeng Wang <Ruifeng.Wang@arm.com>
To: David Marchand <david.marchand@redhat.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
"fbl@sysclose.org" <fbl@sysclose.org>,
"i.maximets@ovn.org" <i.maximets@ovn.org>,
Chenbo Xia <chenbo.xia@intel.com>,
Bruce Richardson <bruce.richardson@intel.com>,
Konstantin Ananyev <konstantin.ananyev@intel.com>,
"jerinj@marvell.com" <jerinj@marvell.com>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH 4/5] net/virtio: refactor Tx offload helper
Date: Fri, 9 Apr 2021 02:31:52 +0000
Message-ID: <AM5PR0802MB2465D30CDF77F7C6AD36D6F49E739@AM5PR0802MB2465.eurprd08.prod.outlook.com>
In-Reply-To: <20210401095243.18211-5-david.marchand@redhat.com>
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Thursday, April 1, 2021 5:53 PM
> To: dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; olivier.matz@6wind.com;
> fbl@sysclose.org; i.maximets@ovn.org; Chenbo Xia <chenbo.xia@intel.com>;
> Bruce Richardson <bruce.richardson@intel.com>; Konstantin Ananyev
> <konstantin.ananyev@intel.com>; jerinj@marvell.com; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: [PATCH 4/5] net/virtio: refactor Tx offload helper
>
> Purely cosmetic, but it is rather odd to have an "offload" helper that first
> checks whether it actually has anything to do.
> We already have the same check in most callers, so move this branch into
> them.
>
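The split looks good to me. For other reviewers, the resulting caller-side
pattern is roughly the following (a minimal sketch reusing the locals already
present in the driver, not new code):

    if (!vq->hw->has_tx_offload)
        virtqueue_clear_net_hdr(hdr);        /* no Tx offload negotiated: zero the header */
    else
        virtqueue_xmit_offload(hdr, cookie); /* fill checksum/TSO fields from the mbuf */

i.e. the in-order and packed-fast paths keep their existing if/else, and the
slow paths simply gain the "if (vq->hw->has_tx_offload)" guard before the call.
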
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> drivers/net/virtio/virtio_rxtx.c | 7 +-
> drivers/net/virtio/virtio_rxtx_packed_avx.h | 2 +-
> drivers/net/virtio/virtio_rxtx_packed_neon.h | 2 +-
> drivers/net/virtio/virtqueue.h | 83 +++++++++-----------
> 4 files changed, 44 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 40283001b0..a4e37ef379 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -448,7 +448,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> if (!vq->hw->has_tx_offload)
> virtqueue_clear_net_hdr(hdr);
> else
> - virtqueue_xmit_offload(hdr, cookies[i], true);
> + virtqueue_xmit_offload(hdr, cookies[i]);
>
> start_dp[idx].addr = rte_mbuf_data_iova(cookies[i]) - head_size;
> start_dp[idx].len = cookies[i]->data_len + head_size;
> @@ -495,7 +495,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
> if (!vq->hw->has_tx_offload)
> virtqueue_clear_net_hdr(hdr);
> else
> - virtqueue_xmit_offload(hdr, cookie, true);
> + virtqueue_xmit_offload(hdr, cookie);
>
> dp->addr = rte_mbuf_data_iova(cookie) - head_size;
> dp->len = cookie->data_len + head_size;
> @@ -581,7 +581,8 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> idx = start_dp[idx].next;
> }
>
> - virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
> + if (vq->hw->has_tx_offload)
> + virtqueue_xmit_offload(hdr, cookie);
>
> do {
> start_dp[idx].addr = rte_mbuf_data_iova(cookie);
> diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h
> index 49e845d02a..33cac3244f 100644
> --- a/drivers/net/virtio/virtio_rxtx_packed_avx.h
> +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h
> @@ -115,7 +115,7 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
> virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
> struct virtio_net_hdr *, -head_size);
> - virtqueue_xmit_offload(hdr, tx_pkts[i], true);
> + virtqueue_xmit_offload(hdr, tx_pkts[i]);
> }
> }
>
> diff --git a/drivers/net/virtio/virtio_rxtx_packed_neon.h b/drivers/net/virtio/virtio_rxtx_packed_neon.h
> index 851c81f312..1a49caf8af 100644
> --- a/drivers/net/virtio/virtio_rxtx_packed_neon.h
> +++ b/drivers/net/virtio/virtio_rxtx_packed_neon.h
> @@ -134,7 +134,7 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
> virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
> struct virtio_net_hdr *, -head_size);
> - virtqueue_xmit_offload(hdr, tx_pkts[i], true);
> + virtqueue_xmit_offload(hdr, tx_pkts[i]);
> }
> }
>
> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> index 2e8826bc28..41a9b82a5f 100644
> --- a/drivers/net/virtio/virtqueue.h
> +++ b/drivers/net/virtio/virtqueue.h
> @@ -613,52 +613,44 @@ virtqueue_notify(struct virtqueue *vq)
> } while (0)
>
> static inline void
> -virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
> - struct rte_mbuf *cookie,
> - uint8_t offload)
> +virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
> {
> - if (offload) {
> - uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
> -
> - if (cookie->ol_flags & PKT_TX_TCP_SEG)
> - csum_l4 |= PKT_TX_TCP_CKSUM;
> -
> - switch (csum_l4) {
> - case PKT_TX_UDP_CKSUM:
> - hdr->csum_start = cookie->l2_len + cookie->l3_len;
> - hdr->csum_offset = offsetof(struct rte_udp_hdr,
> - dgram_cksum);
> - hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> - break;
> -
> - case PKT_TX_TCP_CKSUM:
> - hdr->csum_start = cookie->l2_len + cookie->l3_len;
> - hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
> - hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> - break;
> -
> - default:
> - ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
> - ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
> - ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
> - break;
> - }
> + uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
> +
> + if (cookie->ol_flags & PKT_TX_TCP_SEG)
> + csum_l4 |= PKT_TX_TCP_CKSUM;
> +
> + switch (csum_l4) {
> + case PKT_TX_UDP_CKSUM:
> + hdr->csum_start = cookie->l2_len + cookie->l3_len;
> + hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
> + hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> + break;
> +
> + case PKT_TX_TCP_CKSUM:
> + hdr->csum_start = cookie->l2_len + cookie->l3_len;
> + hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
> + hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> + break;
> +
> + default:
> + ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
> + break;
> + }
>
> - /* TCP Segmentation Offload */
> - if (cookie->ol_flags & PKT_TX_TCP_SEG) {
> - hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
> - VIRTIO_NET_HDR_GSO_TCPV6 :
> - VIRTIO_NET_HDR_GSO_TCPV4;
> - hdr->gso_size = cookie->tso_segsz;
> - hdr->hdr_len =
> - cookie->l2_len +
> - cookie->l3_len +
> - cookie->l4_len;
> - } else {
> - ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> - ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> - ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
> - }
> + /* TCP Segmentation Offload */
> + if (cookie->ol_flags & PKT_TX_TCP_SEG) {
> + hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
> + VIRTIO_NET_HDR_GSO_TCPV6 :
> + VIRTIO_NET_HDR_GSO_TCPV4;
> + hdr->gso_size = cookie->tso_segsz;
> + hdr->hdr_len = cookie->l2_len + cookie->l3_len + cookie->l4_len;
> + } else {
> + ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> + ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
> }
> }
>
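Just to confirm my reading of the new helper, assuming an application requests
TCP checksum offload plus TSO over IPv4 on the mbuf (illustrative values, not
taken from this patch):

    struct rte_mbuf *m;   /* Tx mbuf handed to the PMD */
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG;
    m->l2_len = sizeof(struct rte_ether_hdr);  /* 14 */
    m->l3_len = sizeof(struct rte_ipv4_hdr);   /* 20 */
    m->l4_len = sizeof(struct rte_tcp_hdr);    /* 20 */
    m->tso_segsz = 1448;

    /* virtqueue_xmit_offload(hdr, m) then produces:            */
    /* hdr->csum_start  = 14 + 20 = 34                          */
    /* hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum)   */
    /* hdr->flags       = VIRTIO_NET_HDR_F_NEEDS_CSUM           */
    /* hdr->gso_type    = VIRTIO_NET_HDR_GSO_TCPV4              */
    /* hdr->gso_size    = 1448                                  */
    /* hdr->hdr_len     = 14 + 20 + 20 = 54                     */

which is the same behaviour as before the refactor, as expected.
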
> @@ -737,7 +729,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> }
> }
>
> - virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
> + if (vq->hw->has_tx_offload)
> + virtqueue_xmit_offload(hdr, cookie);
>
> do {
> uint16_t flags;
> --
> 2.23.0
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>