To: Olivier Matz
Cc: "Xia, Chenbo", Ivan Malov, "dev@dpdk.org", Maxime Coquelin,
 "stable@dpdk.org", Yuanhan Liu
References: <20210830142655.18373-1-ivan.malov@oktetlabs.ru>
 <20210916184955.2755-1-ivan.malov@oktetlabs.ru>
From: Andrew Rybchenko
Organization: OKTET Labs
Message-ID: <42d842c8-d0e9-f9a3-495c-0a693f146a32@oktetlabs.ru>
Date: Mon, 18 Oct 2021 11:20:25 +0300
Subject: Re: [dpdk-dev] [PATCH v2] net/virtio: handle Tx checksums correctly for tunnel packets

On 10/18/21 10:54 AM, Olivier Matz wrote:
> On Mon, Oct 18, 2021 at 10:04:46AM +0300, Andrew Rybchenko wrote:
>> On 10/15/21 11:32 AM, Olivier Matz wrote:
>>> On Thu, Oct 14, 2021 at 07:12:29AM +0000, Xia, Chenbo wrote:
>>>>> -----Original Message-----
>>>>> From: Ivan Malov
>>>>> Sent: Friday, September 17, 2021 2:50 AM
>>>>> To: dev@dpdk.org
>>>>> Cc: Maxime Coquelin; stable@dpdk.org; Andrew Rybchenko;
>>>>> Xia, Chenbo; Yuanhan Liu; Olivier Matz
>>>>>
>>>>> Subject: [PATCH v2] net/virtio: handle Tx checksums correctly for tunnel
>>>>> packets
>>>>>
>>>>> Tx prepare method calls rte_net_intel_cksum_prepare(), which
>>>>> handles tunnel packets correctly, but Tx burst path does not
>>>>> take tunnel presence into account when computing the offsets.
>>>>>
>>>>> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
>>>>> Cc: stable@dpdk.org
>>>>>
>>>>> Signed-off-by: Ivan Malov
>>>>> Reviewed-by: Andrew Rybchenko
>>>>> ---
>>>>>  drivers/net/virtio/virtqueue.h | 9 ++++++---
>>>>>  1 file changed, 6 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>>>> index 03957b2bd0..b83ff32efb 100644
>>>>> --- a/drivers/net/virtio/virtqueue.h
>>>>> +++ b/drivers/net/virtio/virtqueue.h
>>>>> @@ -620,19 +620,21 @@ static inline void
>>>>>  virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
>>>>>  {
>>>>>  	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
>>>>> +	uint16_t o_l23_len = (cookie->ol_flags & PKT_TX_TUNNEL_MASK) ?
>>>>> +			     cookie->outer_l2_len + cookie->outer_l3_len : 0;
>>>>>
>>>>>  	if (cookie->ol_flags & PKT_TX_TCP_SEG)
>>>>>  		csum_l4 |= PKT_TX_TCP_CKSUM;
>>>>>
>>>>>  	switch (csum_l4) {
>>>>>  	case PKT_TX_UDP_CKSUM:
>>>>> -		hdr->csum_start = cookie->l2_len + cookie->l3_len;
>>>>> +		hdr->csum_start = o_l23_len + cookie->l2_len + cookie->l3_len;
>>>>>  		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
>>>>>  		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
>>>>>  		break;
>>>>>
>>>>>  	case PKT_TX_TCP_CKSUM:
>>>>> -		hdr->csum_start = cookie->l2_len + cookie->l3_len;
>>>>> +		hdr->csum_start = o_l23_len + cookie->l2_len + cookie->l3_len;
>>>>>  		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
>>>>>  		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
>>>>>  		break;
>>>>> @@ -650,7 +652,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct
>>>>> rte_mbuf *cookie)
>>>>>  				VIRTIO_NET_HDR_GSO_TCPV6 :
>>>>>  				VIRTIO_NET_HDR_GSO_TCPV4;
>>>>>  		hdr->gso_size = cookie->tso_segsz;
>>>>> -		hdr->hdr_len = cookie->l2_len + cookie->l3_len + cookie->l4_len;
>>>>> +		hdr->hdr_len = o_l23_len + cookie->l2_len + cookie->l3_len +
>>>>> +			       cookie->l4_len;
>>>>>  	} else {
>>>>>  		ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
>>>>>  		ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
>>>>> --
>>>>> 2.20.1
>>>>
>>>> Reviewed-by: Chenbo Xia
>>>>
>>>
>>> I have one comment: from the application's perspective, it has to take
>>> into account that the driver does not support outer tunnel offload
>>> (this matches the advertised capabilities). For instance, in case of a
>>> vxlan tunnel, if the outer checksum needs to be calculated, it has to
>>> be done by the application. In short, the application can ask to
>>> offload the inner part if no offload is required on the outer part.
>>
>> Yes. It is really complicated.
>>
>> Let's name below case (A).
>>
>>> Also, since grep "PKT_TX_TUNNEL" in drivers/net/ixgbe gives nothing, it
>>> seems the ixgbe driver does not support the same offload request as the
>>> one described in this patch:
>>>   (m->ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_XXXXX
>>>   m->outer_l2_len = outer l2 length
>>>   m->outer_l3_len = outer l3 length
>>>   m->l2_len = outer l4 length + tunnel len + inner l2 len
>>>   m->l3_len = inner l3 len
>>>   m->l4_len = inner l4 len
>>
>> True.
>>
>> Let's name below case (B).
>>
>>> An alternative for doing the same (that would work with ixgbe and
>>> current virtio) is to give:
>>>   (m->ol_flags & PKT_TX_TUNNEL_MASK) == 0
>>>   m->l2_len = outer lengths + tunnel len + inner l2 len
>>>   m->l3_len = inner l3 len
>>>   m->l4_len = inner l4 len
>>
>> Possible, but I think it is a bit less friendly.
>> If the application supports both cases, with and without the
>> outer IPv4 checksum offload, it will require extra steps from
>> the application to remove the tunnel mask and collapse the outer
>> lengths into l2_len (in addition to the outer IPv4 checksum
>> calculation in SW).
>>
>>> I think a capability may be missing to differentiate which drivers
>>> support which mode.
>>
>> Yes, something like TX_INNER_CKSUM.
>> Some drivers already support it, some do not.
>>
>>> Or, all drivers could be fixed to support both modes
>>> (and this would make this patch valid).
>>
>> I disagree with the point that something is required to
>> make the patch valid. Before the patch the driver supports
>> just (B). After the patch the driver supports both (A) and (B).
>> It makes the driver support more cases.
>
> Yes, I misspoke. I wanted to say that today it is difficult for an
> application to use this new mode because it does not know which driver
> supports it. Fixing all drivers would solve the issue without a
> capability.
>
> A driver/hw supporting one mode can easily support the other one (this
> patch demonstrates it), so maybe it's not worth having a capability.

Good point. I'll keep it in mind and try to find time to take a look
at other drivers as well.

>
> But yes, this is not a good reason to reject the patch.
>
> So:
> Reviewed-by: Olivier Matz
>
>
>> Please note that rte_net_intel_cksum_prepare() supports
>> both (A) and (B) as well.
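
For reference, below is a minimal sketch of how an application could fill
the mbuf Tx offload fields for a VXLAN-encapsulated TCP packet in the two
layouts named above: case (A) with the tunnel flag set and case (B) without
it. The helper names and header lengths are assumptions for illustration
only, and the outer checksums are left to software in both cases.

#include <rte_mbuf.h>

/*
 * Illustrative sketch: request inner TCP checksum offload for a VXLAN
 * packet (outer Ethernet/IPv4/UDP + VXLAN + inner Ethernet/IPv4/TCP).
 * Header lengths are assumed typical values; outer checksums are not
 * offloaded and must be computed in software by the application.
 */

/* Case (A): tunnel type reported, outer lengths carried separately. */
static void
fill_tx_offload_case_a(struct rte_mbuf *m)
{
	m->ol_flags |= PKT_TX_TUNNEL_VXLAN | PKT_TX_IPV4 | PKT_TX_TCP_CKSUM;
	m->outer_l2_len = 14;		/* outer Ethernet */
	m->outer_l3_len = 20;		/* outer IPv4 */
	m->l2_len = 8 + 8 + 14;		/* outer UDP + VXLAN + inner Ethernet */
	m->l3_len = 20;			/* inner IPv4 */
	m->l4_len = 20;			/* inner TCP */
}

/* Case (B): no tunnel flag, everything before the inner L3 goes to l2_len. */
static void
fill_tx_offload_case_b(struct rte_mbuf *m)
{
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_TCP_CKSUM;
	m->l2_len = 14 + 20 + 8 + 8 + 14;	/* outer L2/L3/L4 + VXLAN + inner L2 */
	m->l3_len = 20;			/* inner IPv4 */
	m->l4_len = 20;			/* inner TCP */
}

With the patch applied, both helpers would yield the same csum_start in
virtqueue_xmit_offload(); before the patch, only case (B) produced a
correct checksum offset for tunnelled packets.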