From: "Ananyev, Konstantin"
To: Olivier Matz, dev@dpdk.org
Cc: jigsaw@gmail.com
Subject: Re: [dpdk-dev] [PATCH v3 08/13] testpmd: rework csum forward engine
Date: Wed, 26 Nov 2014 10:10:14 +0000
Message-ID: <2601191342CEEE43887BDE71AB977258213BA62A@IRSMSX105.ger.corp.intel.com>
In-Reply-To: <1416524335-22753-9-git-send-email-olivier.matz@6wind.com>
References: <1415984609-2484-1-git-send-email-olivier.matz@6wind.com>
 <1416524335-22753-1-git-send-email-olivier.matz@6wind.com>
 <1416524335-22753-9-git-send-email-olivier.matz@6wind.com>

Hi Olivier,

> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Thursday, November 20, 2014 10:59 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; Walukiewicz, Miroslaw; Liu, Jijiang; Liu, Yong; jigsaw@gmail.com; Richardson, Bruce; Ananyev, Konstantin
> Subject: [PATCH v3 08/13] testpmd: rework csum forward engine
>
> The csum forward engine was becoming too complex to be used and
> extended (the next commits want to add the support of TSO):
>
> - no explanation about what the code does
> - code is not factorized, lots of code duplicated, especially between
>   ipv4/ipv6
> - user command line api: use of bitmasks that need to be calculated by
>   the user
> - the user flags don't have the same semantic:
>   - for legacy IP/UDP/TCP/SCTP, it selects software or hardware checksum
>   - for other (vxlan), it selects between hardware checksum or no
>     checksum
> - the code relies too much on flags set by the driver without software
>   alternative (ex: PKT_RX_TUNNEL_IPV4_HDR). It is nice to be able to
>   compare a software implementation with the hardware offload.
>
> This commit tries to fix these issues, and provide a simple definition
> of what is done by the forward engine:
>
> * Receive a burst of packets, and for supported packet types:
> *  - modify the IPs
> *  - reprocess the checksum in SW or HW, depending on testpmd command line
> *    configuration
> * Then packets are transmitted on the output port.
> *
> * Supported packets are:
> *   Ether / (vlan) / IP|IP6 / UDP|TCP|SCTP .
> *   Ether / (vlan) / IP|IP6 / UDP / VxLAN / Ether / IP|IP6 / UDP|TCP|SCTP
> *
> * The network parser supposes that the packet is contiguous, which may
> * not be the case in real life.

As far as I can see, you removed the code that sets the TX_PKT_IPV4 and
TX_PKT_IPV6 bits in ol_flags. I think we need to keep it. The reason is:

With FVL, to make HW TX checksum offload work, SW is responsible for telling
the HW what kind of L3 header the packet carries. Possible values are:
- IPv4 hdr with HW checksum calculation
- IPv4 hdr (checksum done by SW)
- IPv6 hdr
- unknown

So, for example, for the packet ETHER_HDR/IPV6_HDR/TCP_HDR/DATA:
to request HW TCP checksum offload, SW has to tell the HW that the packet
carries an IPv6 header (plus, as for ixgbe: l2_hdr_len, l3_hdr_len, l4_type,
l4_hdr_len). That's why TX_PKT_IPV4 and TX_PKT_IPV6 were introduced (see the
short sketch at the bottom of this mail, after the quoted patch).

Yes, it is a change in the public API for HW TX offload, but I don't see any
other way to avoid it (apart from making the TX function itself parse the
packet, which is obviously not a good choice).

Note that existing apps running on existing HW (ixgbe/igb/em) are not
affected. Apps that are also supposed to run on FVL HW, though, have to
follow the new convention.

So I suggest we keep setting these flags in csumonly.c.

Apart from that, the patch looks good to me.
And yes, we would need to change the way we handle TX offload for tunnelled
packets.

Konstantin

>
> Signed-off-by: Olivier Matz
> ---
>  app/test-pmd/cmdline.c  | 156 ++++++++---
>  app/test-pmd/config.c   |  13 +-
>  app/test-pmd/csumonly.c | 676 ++++++++++++++++++++++--------------------------
>  app/test-pmd/testpmd.h  |  17 +-
>  4 files changed, 437 insertions(+), 425 deletions(-)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 4c3fc76..61e4340 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -310,19 +310,19 @@ static void cmd_help_long_parsed(void *parsed_result,
>  			" Disable hardware insertion of a VLAN header in"
>  			" packets sent on a port.\n\n"
>
> -			"tx_checksum set (mask) (port_id)\n"
> -			"    Enable hardware insertion of checksum offload with"
> -			" the 8-bit mask, 0~0xff, in packets sent on a port.\n"
> -			"    bit 0 - insert ip checksum offload if set\n"
> -			"    bit 1 - insert udp checksum offload if set\n"
> -			"    bit 2 - insert tcp checksum offload if set\n"
> -			"    bit 3 - insert sctp checksum offload if set\n"
> -			"    bit 4 - insert inner ip checksum offload if set\n"
> -			"    bit 5 - insert inner udp checksum offload if set\n"
> -			"    bit 6 - insert inner tcp checksum offload if set\n"
> -			"    bit 7 - insert inner sctp checksum offload if set\n"
> +			"tx_cksum set (ip|udp|tcp|sctp|vxlan) (hw|sw) (port_id)\n"
> +			"    Select hardware or software calculation of the"
> +			" checksum when transmitting a packet using the"
> +			" csum forward engine.\n"
> +			"    ip|udp|tcp|sctp always concern the inner layer.\n"
> +			"    vxlan concerns the outer IP and UDP layer (in"
> +			" case the packet is recognized as a vxlan packet by"
> +			" the forward engine)\n"
>  			"    Please check the NIC datasheet for HW limits.\n\n"
>
> +			"tx_checksum show (port_id)\n"
> +			"    Display tx checksum offload configuration\n\n"
> +
>  			"set fwd (%s)\n"
>  			"    Set packet forwarding mode.\n\n"
>
> @@ -2738,48 +2738,131 @@ cmdline_parse_inst_t cmd_tx_vlan_reset = {
>
>
>  /* *** ENABLE HARDWARE INSERTION OF CHECKSUM IN TX PACKETS *** */
> -struct cmd_tx_cksum_set_result {
> +struct cmd_tx_cksum_result {
>  	cmdline_fixed_string_t tx_cksum;
> -	cmdline_fixed_string_t set;
>
- uint8_t cksum_mask; > + cmdline_fixed_string_t mode; > + cmdline_fixed_string_t proto; > + cmdline_fixed_string_t hwsw; > uint8_t port_id; > }; >=20 > static void > -cmd_tx_cksum_set_parsed(void *parsed_result, > +cmd_tx_cksum_parsed(void *parsed_result, > __attribute__((unused)) struct cmdline *cl, > __attribute__((unused)) void *data) > { > - struct cmd_tx_cksum_set_result *res =3D parsed_result; > + struct cmd_tx_cksum_result *res =3D parsed_result; > + int hw =3D 0; > + uint16_t ol_flags, mask =3D 0; > + struct rte_eth_dev_info dev_info; > + > + if (port_id_is_invalid(res->port_id)) { > + printf("invalid port %d\n", res->port_id); > + return; > + } >=20 > - tx_cksum_set(res->port_id, res->cksum_mask); > + if (!strcmp(res->mode, "set")) { > + > + if (!strcmp(res->hwsw, "hw")) > + hw =3D 1; > + > + if (!strcmp(res->proto, "ip")) { > + mask =3D TESTPMD_TX_OFFLOAD_IP_CKSUM; > + } else if (!strcmp(res->proto, "udp")) { > + mask =3D TESTPMD_TX_OFFLOAD_UDP_CKSUM; > + } else if (!strcmp(res->proto, "tcp")) { > + mask =3D TESTPMD_TX_OFFLOAD_TCP_CKSUM; > + } else if (!strcmp(res->proto, "sctp")) { > + mask =3D TESTPMD_TX_OFFLOAD_SCTP_CKSUM; > + } else if (!strcmp(res->proto, "vxlan")) { > + mask =3D TESTPMD_TX_OFFLOAD_VXLAN_CKSUM; > + } > + > + if (hw) > + ports[res->port_id].tx_ol_flags |=3D mask; > + else > + ports[res->port_id].tx_ol_flags &=3D (~mask); > + } > + > + ol_flags =3D ports[res->port_id].tx_ol_flags; > + printf("IP checksum offload is %s\n", > + (ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM) ? "hw" : "sw"); > + printf("UDP checksum offload is %s\n", > + (ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw"); > + printf("TCP checksum offload is %s\n", > + (ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw"); > + printf("SCTP checksum offload is %s\n", > + (ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw"); > + printf("VxLAN checksum offload is %s\n", > + (ol_flags & TESTPMD_TX_OFFLOAD_VXLAN_CKSUM) ? 
"hw" : "sw"); > + > + /* display warnings if configuration is not supported by the NIC */ > + rte_eth_dev_info_get(res->port_id, &dev_info); > + if ((ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM) && > + (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) =3D=3D 0) { > + printf("Warning: hardware IP checksum enabled but not " > + "supported by port %d\n", res->port_id); > + } > + if ((ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) && > + (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) =3D=3D 0) { > + printf("Warning: hardware UDP checksum enabled but not " > + "supported by port %d\n", res->port_id); > + } > + if ((ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) && > + (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) =3D=3D 0) { > + printf("Warning: hardware TCP checksum enabled but not " > + "supported by port %d\n", res->port_id); > + } > + if ((ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) && > + (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) =3D=3D 0) { > + printf("Warning: hardware SCTP checksum enabled but not " > + "supported by port %d\n", res->port_id); > + } > } >=20 > -cmdline_parse_token_string_t cmd_tx_cksum_set_tx_cksum =3D > - TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_set_result, > +cmdline_parse_token_string_t cmd_tx_cksum_tx_cksum =3D > + TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_result, > tx_cksum, "tx_checksum"); > -cmdline_parse_token_string_t cmd_tx_cksum_set_set =3D > - TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_set_result, > - set, "set"); > -cmdline_parse_token_num_t cmd_tx_cksum_set_cksum_mask =3D > - TOKEN_NUM_INITIALIZER(struct cmd_tx_cksum_set_result, > - cksum_mask, UINT8); > -cmdline_parse_token_num_t cmd_tx_cksum_set_portid =3D > - TOKEN_NUM_INITIALIZER(struct cmd_tx_cksum_set_result, > +cmdline_parse_token_string_t cmd_tx_cksum_mode =3D > + TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_result, > + mode, "set"); > +cmdline_parse_token_string_t cmd_tx_cksum_proto =3D > + TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_result, > + proto, "ip#tcp#udp#sctp#vxlan"); > +cmdline_parse_token_string_t cmd_tx_cksum_hwsw =3D > + TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_result, > + hwsw, "hw#sw"); > +cmdline_parse_token_num_t cmd_tx_cksum_portid =3D > + TOKEN_NUM_INITIALIZER(struct cmd_tx_cksum_result, > port_id, UINT8); >=20 > cmdline_parse_inst_t cmd_tx_cksum_set =3D { > - .f =3D cmd_tx_cksum_set_parsed, > + .f =3D cmd_tx_cksum_parsed, > + .data =3D NULL, > + .help_str =3D "enable/disable hardware calculation of L3/L4 checksum wh= en " > + "using csum forward engine: tx_cksum set ip|tcp|udp|sctp|vxlan hw|sw <= port>", > + .tokens =3D { > + (void *)&cmd_tx_cksum_tx_cksum, > + (void *)&cmd_tx_cksum_mode, > + (void *)&cmd_tx_cksum_proto, > + (void *)&cmd_tx_cksum_hwsw, > + (void *)&cmd_tx_cksum_portid, > + NULL, > + }, > +}; > + > +cmdline_parse_token_string_t cmd_tx_cksum_mode_show =3D > + TOKEN_STRING_INITIALIZER(struct cmd_tx_cksum_result, > + mode, "show"); > + > +cmdline_parse_inst_t cmd_tx_cksum_show =3D { > + .f =3D cmd_tx_cksum_parsed, > .data =3D NULL, > - .help_str =3D "enable hardware insertion of L3/L4checksum with a given = " > - "mask in packets sent on a port, the bit mapping is given as, Bit 0 for= ip, " > - "Bit 1 for UDP, Bit 2 for TCP, Bit 3 for SCTP, Bit 4 for inner ip, " > - "Bit 5 for inner UDP, Bit 6 for inner TCP, Bit 7 for inner SCTP", > + .help_str =3D "show checksum offload configuration: tx_cksum show ", > .tokens =3D { > - (void *)&cmd_tx_cksum_set_tx_cksum, > - (void *)&cmd_tx_cksum_set_set, > - (void *)&cmd_tx_cksum_set_cksum_mask, > - 
(void *)&cmd_tx_cksum_set_portid, > + (void *)&cmd_tx_cksum_tx_cksum, > + (void *)&cmd_tx_cksum_mode_show, > + (void *)&cmd_tx_cksum_portid, > NULL, > }, > }; > @@ -7796,6 +7879,7 @@ cmdline_parse_ctx_t main_ctx[] =3D { > (cmdline_parse_inst_t *)&cmd_tx_vlan_reset, > (cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid, > (cmdline_parse_inst_t *)&cmd_tx_cksum_set, > + (cmdline_parse_inst_t *)&cmd_tx_cksum_show, > (cmdline_parse_inst_t *)&cmd_link_flow_control_set, > (cmdline_parse_inst_t *)&cmd_link_flow_control_set_rx, > (cmdline_parse_inst_t *)&cmd_link_flow_control_set_tx, > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c > index 34b6fdb..d093227 100644 > --- a/app/test-pmd/config.c > +++ b/app/test-pmd/config.c > @@ -32,7 +32,7 @@ > */ > /* BSD LICENSE > * > - * Copyright(c) 2013 6WIND. > + * Copyright 2013-2014 6WIND S.A. > * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions > @@ -1744,17 +1744,6 @@ set_qmap(portid_t port_id, uint8_t is_rx, uint16_t= queue_id, uint8_t map_value) > } >=20 > void > -tx_cksum_set(portid_t port_id, uint64_t ol_flags) > -{ > - uint64_t tx_ol_flags; > - if (port_id_is_invalid(port_id)) > - return; > - /* Clear last 8 bits and then set L3/4 checksum mask again */ > - tx_ol_flags =3D ports[port_id].tx_ol_flags & (~0x0FFull); > - ports[port_id].tx_ol_flags =3D ((ol_flags & 0xff) | tx_ol_flags); > -} > - > -void > fdir_add_signature_filter(portid_t port_id, uint8_t queue_id, > struct rte_fdir_filter *fdir_filter) > { > diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c > index 743094a..4d6f1ee 100644 > --- a/app/test-pmd/csumonly.c > +++ b/app/test-pmd/csumonly.c > @@ -2,6 +2,7 @@ > * BSD LICENSE > * > * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. > + * Copyright 2014 6WIND S.A. > * All rights reserved. > * > * Redistribution and use in source and binary forms, with or without > @@ -73,13 +74,19 @@ > #include > #include "testpmd.h" >=20 > - > - > #define IP_DEFTTL 64 /* from RFC 1340. */ > #define IP_VERSION 0x40 > #define IP_HDRLEN 0x05 /* default IP header length =3D=3D five 32-bits = words. 
*/ > #define IP_VHL_DEF (IP_VERSION | IP_HDRLEN) >=20 > +/* we cannot use htons() from arpa/inet.h due to name conflicts, and we > + * cannot use rte_cpu_to_be_16() on a constant in a switch/case */ > +#if __BYTE_ORDER =3D=3D __LITTLE_ENDIAN > +#define _htons(x) ((uint16_t)((((x) & 0x00ffU) << 8) | (((x) & 0xff00U) = >> 8))) > +#else > +#define _htons(x) (x) > +#endif > + > static inline uint16_t > get_16b_sum(uint16_t *ptr16, uint32_t nr) > { > @@ -112,7 +119,7 @@ get_ipv4_cksum(struct ipv4_hdr *ipv4_hdr) >=20 >=20 > static inline uint16_t > -get_ipv4_psd_sum (struct ipv4_hdr * ip_hdr) > +get_ipv4_psd_sum(struct ipv4_hdr *ip_hdr) > { > /* Pseudo Header for IPv4/UDP/TCP checksum */ > union ipv4_psd_header { > @@ -136,7 +143,7 @@ get_ipv4_psd_sum (struct ipv4_hdr * ip_hdr) > } >=20 > static inline uint16_t > -get_ipv6_psd_sum (struct ipv6_hdr * ip_hdr) > +get_ipv6_psd_sum(struct ipv6_hdr *ip_hdr) > { > /* Pseudo Header for IPv6/UDP/TCP checksum */ > union ipv6_psd_header { > @@ -158,6 +165,15 @@ get_ipv6_psd_sum (struct ipv6_hdr * ip_hdr) > return get_16b_sum(psd_hdr.u16_arr, sizeof(psd_hdr)); > } >=20 > +static uint16_t > +get_psd_sum(void *l3_hdr, uint16_t ethertype) > +{ > + if (ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) > + return get_ipv4_psd_sum(l3_hdr); > + else /* assume ethertype =3D=3D ETHER_TYPE_IPv6 */ > + return get_ipv6_psd_sum(l3_hdr); > +} > + > static inline uint16_t > get_ipv4_udptcp_checksum(struct ipv4_hdr *ipv4_hdr, uint16_t *l4_hdr) > { > @@ -174,7 +190,6 @@ get_ipv4_udptcp_checksum(struct ipv4_hdr *ipv4_hdr, u= int16_t *l4_hdr) > if (cksum =3D=3D 0) > cksum =3D 0xffff; > return (uint16_t)cksum; > - > } >=20 > static inline uint16_t > @@ -196,48 +211,225 @@ get_ipv6_udptcp_checksum(struct ipv6_hdr *ipv6_hdr= , uint16_t *l4_hdr) > return (uint16_t)cksum; > } >=20 > +static uint16_t > +get_udptcp_checksum(void *l3_hdr, void *l4_hdr, uint16_t ethertype) > +{ > + if (ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) > + return get_ipv4_udptcp_checksum(l3_hdr, l4_hdr); > + else /* assume ethertype =3D=3D ETHER_TYPE_IPv6 */ > + return get_ipv6_udptcp_checksum(l3_hdr, l4_hdr); > +} >=20 > /* > - * Forwarding of packets. Change the checksum field with HW or SW method= s > - * The HW/SW method selection depends on the ol_flags on every packet > + * Parse an ethernet header to fill the ethertype, l2_len, l3_len and > + * ipproto. This function is able to recognize IPv4/IPv6 with one option= al vlan > + * header. 
> + */ > +static void > +parse_ethernet(struct ether_hdr *eth_hdr, uint16_t *ethertype, uint16_t = *l2_len, > + uint16_t *l3_len, uint8_t *l4_proto) > +{ > + struct ipv4_hdr *ipv4_hdr; > + struct ipv6_hdr *ipv6_hdr; > + > + *l2_len =3D sizeof(struct ether_hdr); > + *ethertype =3D eth_hdr->ether_type; > + > + if (*ethertype =3D=3D _htons(ETHER_TYPE_VLAN)) { > + struct vlan_hdr *vlan_hdr =3D (struct vlan_hdr *)(eth_hdr + 1); > + > + *l2_len +=3D sizeof(struct vlan_hdr); > + *ethertype =3D vlan_hdr->eth_proto; > + } > + > + switch (*ethertype) { > + case _htons(ETHER_TYPE_IPv4): > + ipv4_hdr =3D (struct ipv4_hdr *) ((char *)eth_hdr + *l2_len); > + *l3_len =3D (ipv4_hdr->version_ihl & 0x0f) * 4; > + *l4_proto =3D ipv4_hdr->next_proto_id; > + break; > + case _htons(ETHER_TYPE_IPv6): > + ipv6_hdr =3D (struct ipv6_hdr *) ((char *)eth_hdr + *l2_len); > + *l3_len =3D sizeof(struct ipv6_hdr) ; > + *l4_proto =3D ipv6_hdr->proto; > + break; > + default: > + *l3_len =3D 0; > + *l4_proto =3D 0; > + break; > + } > +} > + > +/* modify the IPv4 or IPv4 source address of a packet */ > +static void > +change_ip_addresses(void *l3_hdr, uint16_t ethertype) > +{ > + struct ipv4_hdr *ipv4_hdr =3D l3_hdr; > + struct ipv6_hdr *ipv6_hdr =3D l3_hdr; > + > + if (ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) { > + ipv4_hdr->src_addr =3D > + rte_cpu_to_be_32(rte_be_to_cpu_32(ipv4_hdr->src_addr) + 1); > + } > + else if (ethertype =3D=3D _htons(ETHER_TYPE_IPv6)) { > + ipv6_hdr->src_addr[15] =3D ipv6_hdr->src_addr[15] + 1; > + } > +} > + > +/* if possible, calculate the checksum of a packet in hw or sw, > + * depending on the testpmd command line configuration */ > +static uint64_t > +process_inner_cksums(void *l3_hdr, uint16_t ethertype, uint16_t l3_len, > + uint8_t l4_proto, uint16_t testpmd_ol_flags) > +{ > + struct ipv4_hdr *ipv4_hdr =3D l3_hdr; > + struct udp_hdr *udp_hdr; > + struct tcp_hdr *tcp_hdr; > + struct sctp_hdr *sctp_hdr; > + uint64_t ol_flags =3D 0; > + > + if (ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) { > + ipv4_hdr =3D l3_hdr; > + ipv4_hdr->hdr_checksum =3D 0; > + > + if (testpmd_ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM) > + ol_flags |=3D PKT_TX_IP_CKSUM; > + else > + ipv4_hdr->hdr_checksum =3D get_ipv4_cksum(ipv4_hdr); > + > + } > + else if (ethertype !=3D _htons(ETHER_TYPE_IPv6)) > + return 0; /* packet type not supported nothing to do */ > + > + if (l4_proto =3D=3D IPPROTO_UDP) { > + udp_hdr =3D (struct udp_hdr *)((char *)l3_hdr + l3_len); > + /* do not recalculate udp cksum if it was 0 */ > + if (udp_hdr->dgram_cksum !=3D 0) { > + udp_hdr->dgram_cksum =3D 0; > + if (testpmd_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) { > + ol_flags |=3D PKT_TX_UDP_CKSUM; > + udp_hdr->dgram_cksum =3D get_psd_sum(l3_hdr, > + ethertype); > + } > + else { > + udp_hdr->dgram_cksum =3D > + get_udptcp_checksum(l3_hdr, udp_hdr, > + ethertype); > + } > + } > + } > + else if (l4_proto =3D=3D IPPROTO_TCP) { > + tcp_hdr =3D (struct tcp_hdr *)((char *)l3_hdr + l3_len); > + tcp_hdr->cksum =3D 0; > + if (testpmd_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) { > + ol_flags |=3D PKT_TX_TCP_CKSUM; > + tcp_hdr->cksum =3D get_psd_sum(l3_hdr, ethertype); > + } > + else { > + tcp_hdr->cksum =3D > + get_udptcp_checksum(l3_hdr, tcp_hdr, ethertype); > + } > + } > + else if (l4_proto =3D=3D IPPROTO_SCTP) { > + sctp_hdr =3D (struct sctp_hdr *)((char *)l3_hdr + l3_len); > + sctp_hdr->cksum =3D 0; > + /* sctp payload must be a multiple of 4 to be > + * offloaded */ > + if ((testpmd_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) && > + ((ipv4_hdr->total_length & 
0x3) =3D=3D 0)) { > + ol_flags |=3D PKT_TX_SCTP_CKSUM; > + } > + else { > + /* XXX implement CRC32c, example available in > + * RFC3309 */ > + } > + } > + > + return ol_flags; > +} > + > +/* Calculate the checksum of outer header (only vxlan is supported, > + * meaning IP + UDP). The caller already checked that it's a vxlan > + * packet */ > +static uint64_t > +process_outer_cksums(void *outer_l3_hdr, uint16_t outer_ethertype, > + uint16_t outer_l3_len, uint16_t testpmd_ol_flags) > +{ > + struct ipv4_hdr *ipv4_hdr =3D outer_l3_hdr; > + struct ipv6_hdr *ipv6_hdr =3D outer_l3_hdr; > + struct udp_hdr *udp_hdr; > + uint64_t ol_flags =3D 0; > + > + if (testpmd_ol_flags & TESTPMD_TX_OFFLOAD_VXLAN_CKSUM) > + ol_flags |=3D PKT_TX_VXLAN_CKSUM; > + > + if (outer_ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) { > + ipv4_hdr->hdr_checksum =3D 0; > + > + if ((testpmd_ol_flags & TESTPMD_TX_OFFLOAD_VXLAN_CKSUM) =3D=3D 0) > + ipv4_hdr->hdr_checksum =3D get_ipv4_cksum(ipv4_hdr); > + } > + > + udp_hdr =3D (struct udp_hdr *)((char *)outer_l3_hdr + outer_l3_len); > + /* do not recalculate udp cksum if it was 0 */ > + if (udp_hdr->dgram_cksum !=3D 0) { > + udp_hdr->dgram_cksum =3D 0; > + if ((testpmd_ol_flags & TESTPMD_TX_OFFLOAD_VXLAN_CKSUM) =3D=3D 0) { > + if (outer_ethertype =3D=3D _htons(ETHER_TYPE_IPv4)) > + udp_hdr->dgram_cksum =3D > + get_ipv4_udptcp_checksum(ipv4_hdr, > + (uint16_t *)udp_hdr); > + else > + udp_hdr->dgram_cksum =3D > + get_ipv6_udptcp_checksum(ipv6_hdr, > + (uint16_t *)udp_hdr); > + } > + } > + > + return ol_flags; > +} > + > +/* > + * Receive a burst of packets, and for each packet: > + * - parse packet, and try to recognize a supported packet type (1) > + * - if it's not a supported packet type, don't touch the packet, else: > + * - modify the IPs in inner headers and in outer headers if any > + * - reprocess the checksum of all supported layers. This is done in SW > + * or HW, depending on testpmd command line configuration > + * Then transmit packets on the output port. > + * > + * (1) Supported packets are: > + * Ether / (vlan) / IP|IP6 / UDP|TCP|SCTP . > + * Ether / (vlan) / outer IP|IP6 / outer UDP / VxLAN / Ether / IP|IP6 = / > + * UDP|TCP|SCTP > + * > + * The testpmd command line for this forward engine sets the flags > + * TESTPMD_TX_OFFLOAD_* in ports[tx_port].tx_ol_flags. They control > + * wether a checksum must be calculated in software or in hardware. The > + * IP, UDP, TCP and SCTP flags always concern the inner layer. The > + * VxLAN flag concerns the outer IP and UDP layer (if packet is > + * recognized as a vxlan packet). 
> */ > static void > pkt_burst_checksum_forward(struct fwd_stream *fs) > { > - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; > - struct rte_port *txp; > - struct rte_mbuf *mb; > + struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; > + struct rte_port *txp; > + struct rte_mbuf *m; > struct ether_hdr *eth_hdr; > - struct ipv4_hdr *ipv4_hdr; > - struct ether_hdr *inner_eth_hdr; > - struct ipv4_hdr *inner_ipv4_hdr =3D NULL; > - struct ipv6_hdr *ipv6_hdr; > - struct ipv6_hdr *inner_ipv6_hdr =3D NULL; > - struct udp_hdr *udp_hdr; > - struct udp_hdr *inner_udp_hdr; > - struct tcp_hdr *tcp_hdr; > - struct tcp_hdr *inner_tcp_hdr; > - struct sctp_hdr *sctp_hdr; > - struct sctp_hdr *inner_sctp_hdr; > - > + void *l3_hdr =3D NULL, *outer_l3_hdr =3D NULL; /* can be IPv4 or IPv6 *= / > + struct udp_hdr *udp_hdr; > uint16_t nb_rx; > uint16_t nb_tx; > uint16_t i; > uint64_t ol_flags; > - uint64_t pkt_ol_flags; > - uint64_t tx_ol_flags; > - uint16_t l4_proto; > - uint16_t inner_l4_proto =3D 0; > - uint16_t eth_type; > - uint8_t l2_len; > - uint8_t l3_len; > - uint8_t inner_l2_len =3D 0; > - uint8_t inner_l3_len =3D 0; > - > + uint16_t testpmd_ol_flags; > + uint8_t l4_proto; > + uint16_t ethertype =3D 0, outer_ethertype =3D 0; > + uint16_t l2_len =3D 0, l3_len =3D 0, outer_l2_len =3D 0, outer_l3_len = =3D 0; > + int tunnel =3D 0; > uint32_t rx_bad_ip_csum; > uint32_t rx_bad_l4_csum; > - uint8_t ipv4_tunnel; > - uint8_t ipv6_tunnel; >=20 > #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES > uint64_t start_tsc; > @@ -249,9 +441,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) > start_tsc =3D rte_rdtsc(); > #endif >=20 > - /* > - * Receive a burst of packets and forward them. > - */ > + /* receive a burst of packet */ > nb_rx =3D rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, > nb_pkt_per_burst); > if (unlikely(nb_rx =3D=3D 0)) > @@ -265,348 +455,107 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) > rx_bad_l4_csum =3D 0; >=20 > txp =3D &ports[fs->tx_port]; > - tx_ol_flags =3D txp->tx_ol_flags; > + testpmd_ol_flags =3D txp->tx_ol_flags; >=20 > for (i =3D 0; i < nb_rx; i++) { >=20 > - mb =3D pkts_burst[i]; > - l2_len =3D sizeof(struct ether_hdr); > - pkt_ol_flags =3D mb->ol_flags; > - ol_flags =3D (pkt_ol_flags & (~PKT_TX_L4_MASK)); > - ipv4_tunnel =3D (pkt_ol_flags & PKT_RX_TUNNEL_IPV4_HDR) ? > - 1 : 0; > - ipv6_tunnel =3D (pkt_ol_flags & PKT_RX_TUNNEL_IPV6_HDR) ? 
> - 1 : 0; > - eth_hdr =3D rte_pktmbuf_mtod(mb, struct ether_hdr *); > - eth_type =3D rte_be_to_cpu_16(eth_hdr->ether_type); > - if (eth_type =3D=3D ETHER_TYPE_VLAN) { > - /* Only allow single VLAN label here */ > - l2_len +=3D sizeof(struct vlan_hdr); > - eth_type =3D rte_be_to_cpu_16(*(uint16_t *) > - ((uintptr_t)ð_hdr->ether_type + > - sizeof(struct vlan_hdr))); > + ol_flags =3D 0; > + tunnel =3D 0; > + m =3D pkts_burst[i]; > + > + /* Update the L3/L4 checksum error packet statistics */ > + rx_bad_ip_csum +=3D ((m->ol_flags & PKT_RX_IP_CKSUM_BAD) !=3D 0); > + rx_bad_l4_csum +=3D ((m->ol_flags & PKT_RX_L4_CKSUM_BAD) !=3D 0); > + > + /* step 1: dissect packet, parsing optional vlan, ip4/ip6, vxlan > + * and inner headers */ > + > + eth_hdr =3D rte_pktmbuf_mtod(m, struct ether_hdr *); > + parse_ethernet(eth_hdr, ðertype, &l2_len, &l3_len, &l4_proto); > + l3_hdr =3D (char *)eth_hdr + l2_len; > + > + /* check if it's a supported tunnel (only vxlan for now) */ > + if (l4_proto =3D=3D IPPROTO_UDP) { > + udp_hdr =3D (struct udp_hdr *)((char *)l3_hdr + l3_len); > + > + /* currently, this flag is set by i40e only if the > + * packet is vxlan */ > + if (((m->ol_flags & PKT_RX_TUNNEL_IPV4_HDR) || > + (m->ol_flags & PKT_RX_TUNNEL_IPV6_HDR))) > + tunnel =3D 1; > + /* else check udp destination port, 4789 is the default > + * vxlan port (rfc7348) */ > + else if (udp_hdr->dst_port =3D=3D _htons(4789)) > + tunnel =3D 1; > + > + if (tunnel =3D=3D 1) { > + outer_ethertype =3D ethertype; > + outer_l2_len =3D l2_len; > + outer_l3_len =3D l3_len; > + outer_l3_hdr =3D l3_hdr; > + > + eth_hdr =3D (struct ether_hdr *)((char *)udp_hdr + > + sizeof(struct udp_hdr) + > + sizeof(struct vxlan_hdr)); > + > + parse_ethernet(eth_hdr, ðertype, &l2_len, > + &l3_len, &l4_proto); > + l3_hdr =3D (char *)eth_hdr + l2_len; > + } > } >=20 > - /* Update the L3/L4 checksum error packet count */ > - rx_bad_ip_csum +=3D (uint16_t) ((pkt_ol_flags & PKT_RX_IP_CKSUM_BAD) != =3D 0); > - rx_bad_l4_csum +=3D (uint16_t) ((pkt_ol_flags & PKT_RX_L4_CKSUM_BAD) != =3D 0); > - > - /* > - * Try to figure out L3 packet type by SW. > - */ > - if ((pkt_ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_IPV4_HDR_EXT | > - PKT_RX_IPV6_HDR | PKT_RX_IPV6_HDR_EXT)) =3D=3D 0) { > - if (eth_type =3D=3D ETHER_TYPE_IPv4) > - pkt_ol_flags |=3D PKT_RX_IPV4_HDR; > - else if (eth_type =3D=3D ETHER_TYPE_IPv6) > - pkt_ol_flags |=3D PKT_RX_IPV6_HDR; > - } > + /* step 2: change all source IPs (v4 or v6) so we need > + * to recompute the chksums even if they were correct */ >=20 > - /* > - * Simplify the protocol parsing > - * Assuming the incoming packets format as > - * Ethernet2 + optional single VLAN > - * + ipv4 or ipv6 > - * + udp or tcp or sctp or others > - */ > - if (pkt_ol_flags & (PKT_RX_IPV4_HDR | PKT_RX_TUNNEL_IPV4_HDR)) { > + change_ip_addresses(l3_hdr, ethertype); > + if (tunnel =3D=3D 1) > + change_ip_addresses(outer_l3_hdr, outer_ethertype); >=20 > - /* Do not support ipv4 option field */ > - l3_len =3D sizeof(struct ipv4_hdr) ; > + /* step 3: depending on user command line configuration, > + * recompute checksum either in software or flag the > + * mbuf to offload the calculation to the NIC */ >=20 > - ipv4_hdr =3D (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len); > + /* process checksums of inner headers first */ > + ol_flags |=3D process_inner_cksums(l3_hdr, ethertype, > + l3_len, l4_proto, testpmd_ol_flags); >=20 > - l4_proto =3D ipv4_hdr->next_proto_id; > + /* Then process outer headers if any. 
Note that the software > + * checksum will be wrong if one of the inner checksums is > + * processed in hardware. */ > + if (tunnel =3D=3D 1) { > + ol_flags |=3D process_outer_cksums(outer_l3_hdr, > + outer_ethertype, outer_l3_len, testpmd_ol_flags); > + } >=20 > - /* Do not delete, this is required by HW*/ > - ipv4_hdr->hdr_checksum =3D 0; > + /* step 4: fill the mbuf meta data (flags and header lengths) */ >=20 > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM) { > - /* HW checksum */ > - ol_flags |=3D PKT_TX_IP_CKSUM; > + if (tunnel =3D=3D 1) { > + if (testpmd_ol_flags & TESTPMD_TX_OFFLOAD_VXLAN_CKSUM) { > + m->l2_len =3D outer_l2_len; > + m->l3_len =3D outer_l3_len; > + m->inner_l2_len =3D l2_len; > + m->inner_l3_len =3D l3_len; > } > else { > - ol_flags |=3D PKT_TX_IPV4; > - /* SW checksum calculation */ > - ipv4_hdr->src_addr++; > - ipv4_hdr->hdr_checksum =3D get_ipv4_cksum(ipv4_hdr); > + /* if we don't do vxlan cksum in hw, > + outer checksum will be wrong because > + we changed the ip, but it shows that > + we can process the inner header cksum > + in the nic */ > + m->l2_len =3D outer_l2_len + outer_l3_len + > + sizeof(struct udp_hdr) + > + sizeof(struct vxlan_hdr) + l2_len; > + m->l3_len =3D l3_len; > } > - > - if (l4_proto =3D=3D IPPROTO_UDP) { > - udp_hdr =3D (struct udp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) { > - /* HW Offload */ > - ol_flags |=3D PKT_TX_UDP_CKSUM; > - if (ipv4_tunnel) > - udp_hdr->dgram_cksum =3D 0; > - else > - /* Pseudo header sum need be set properly */ > - udp_hdr->dgram_cksum =3D > - get_ipv4_psd_sum(ipv4_hdr); > - } > - else { > - /* SW Implementation, clear checksum field first */ > - udp_hdr->dgram_cksum =3D 0; > - udp_hdr->dgram_cksum =3D get_ipv4_udptcp_checksum(ipv4_hdr, > - (uint16_t *)udp_hdr); > - } > - > - if (ipv4_tunnel) { > - > - uint16_t len; > - > - /* Check if inner L3/L4 checkum flag is set */ > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK) > - ol_flags |=3D PKT_TX_VXLAN_CKSUM; > - > - inner_l2_len =3D sizeof(struct ether_hdr); > - inner_eth_hdr =3D (struct ether_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len > - + ETHER_VXLAN_HLEN); > - > - eth_type =3D rte_be_to_cpu_16(inner_eth_hdr->ether_type); > - if (eth_type =3D=3D ETHER_TYPE_VLAN) { > - inner_l2_len +=3D sizeof(struct vlan_hdr); > - eth_type =3D rte_be_to_cpu_16(*(uint16_t *) > - ((uintptr_t)ð_hdr->ether_type + > - sizeof(struct vlan_hdr))); > - } > - > - len =3D l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len; > - if (eth_type =3D=3D ETHER_TYPE_IPv4) { > - inner_l3_len =3D sizeof(struct ipv4_hdr); > - inner_ipv4_hdr =3D (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len); > - inner_l4_proto =3D inner_ipv4_hdr->next_proto_id; > - > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM) { > - > - /* Do not delete, this is required by HW*/ > - inner_ipv4_hdr->hdr_checksum =3D 0; > - ol_flags |=3D PKT_TX_IPV4_CSUM; > - } > - > - } else if (eth_type =3D=3D ETHER_TYPE_IPv6) { > - inner_l3_len =3D sizeof(struct ipv6_hdr); > - inner_ipv6_hdr =3D (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len); > - inner_l4_proto =3D inner_ipv6_hdr->proto; > - } > - if ((inner_l4_proto =3D=3D IPPROTO_UDP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM)) { > - > - /* HW Offload */ > - ol_flags |=3D PKT_TX_UDP_CKSUM; > - inner_udp_hdr =3D (struct udp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - if (eth_type 
=3D=3D ETHER_TYPE_IPv4) > - inner_udp_hdr->dgram_cksum =3D get_ipv4_psd_sum(inner_ipv4_hdr); > - else if (eth_type =3D=3D ETHER_TYPE_IPv6) > - inner_udp_hdr->dgram_cksum =3D get_ipv6_psd_sum(inner_ipv6_hdr); > - > - } else if ((inner_l4_proto =3D=3D IPPROTO_TCP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM)) { > - /* HW Offload */ > - ol_flags |=3D PKT_TX_TCP_CKSUM; > - inner_tcp_hdr =3D (struct tcp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - if (eth_type =3D=3D ETHER_TYPE_IPv4) > - inner_tcp_hdr->cksum =3D get_ipv4_psd_sum(inner_ipv4_hdr); > - else if (eth_type =3D=3D ETHER_TYPE_IPv6) > - inner_tcp_hdr->cksum =3D get_ipv6_psd_sum(inner_ipv6_hdr); > - } else if ((inner_l4_proto =3D=3D IPPROTO_SCTP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM)) { > - /* HW Offload */ > - ol_flags |=3D PKT_TX_SCTP_CKSUM; > - inner_sctp_hdr =3D (struct sctp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - inner_sctp_hdr->cksum =3D 0; > - } > - > - } > - > - } else if (l4_proto =3D=3D IPPROTO_TCP) { > - tcp_hdr =3D (struct tcp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) { > - ol_flags |=3D PKT_TX_TCP_CKSUM; > - tcp_hdr->cksum =3D get_ipv4_psd_sum(ipv4_hdr); > - } > - else { > - tcp_hdr->cksum =3D 0; > - tcp_hdr->cksum =3D get_ipv4_udptcp_checksum(ipv4_hdr, > - (uint16_t*)tcp_hdr); > - } > - } else if (l4_proto =3D=3D IPPROTO_SCTP) { > - sctp_hdr =3D (struct sctp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) { > - ol_flags |=3D PKT_TX_SCTP_CKSUM; > - sctp_hdr->cksum =3D 0; > - > - /* Sanity check, only number of 4 bytes supported */ > - if ((rte_be_to_cpu_16(ipv4_hdr->total_length) % 4) !=3D 0) > - printf("sctp payload must be a multiple " > - "of 4 bytes for checksum offload"); > - } > - else { > - sctp_hdr->cksum =3D 0; > - /* CRC32c sample code available in RFC3309 */ > - } > - } > - /* End of L4 Handling*/ > - } else if (pkt_ol_flags & (PKT_RX_IPV6_HDR | PKT_RX_TUNNEL_IPV6_HDR)) = { > - ipv6_hdr =3D (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len); > - l3_len =3D sizeof(struct ipv6_hdr) ; > - l4_proto =3D ipv6_hdr->proto; > - ol_flags |=3D PKT_TX_IPV6; > - > - if (l4_proto =3D=3D IPPROTO_UDP) { > - udp_hdr =3D (struct udp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) { > - /* HW Offload */ > - ol_flags |=3D PKT_TX_UDP_CKSUM; > - if (ipv6_tunnel) > - udp_hdr->dgram_cksum =3D 0; > - else > - udp_hdr->dgram_cksum =3D > - get_ipv6_psd_sum(ipv6_hdr); > - } > - else { > - /* SW Implementation */ > - /* checksum field need be clear first */ > - udp_hdr->dgram_cksum =3D 0; > - udp_hdr->dgram_cksum =3D get_ipv6_udptcp_checksum(ipv6_hdr, > - (uint16_t *)udp_hdr); > - } > - > - if (ipv6_tunnel) { > - > - uint16_t len; > - > - /* Check if inner L3/L4 checksum flag is set */ > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK) > - ol_flags |=3D PKT_TX_VXLAN_CKSUM; > - > - inner_l2_len =3D sizeof(struct ether_hdr); > - inner_eth_hdr =3D (struct ether_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len + ETHER_VXLAN_HLEN); > - eth_type =3D rte_be_to_cpu_16(inner_eth_hdr->ether_type); > - > - if (eth_type =3D=3D ETHER_TYPE_VLAN) { > - inner_l2_len +=3D sizeof(struct vlan_hdr); > - eth_type =3D rte_be_to_cpu_16(*(uint16_t *) > - ((uintptr_t)ð_hdr->ether_type + > - 
sizeof(struct vlan_hdr))); > - } > - > - len =3D l2_len + l3_len + ETHER_VXLAN_HLEN + inner_l2_len; > - > - if (eth_type =3D=3D ETHER_TYPE_IPv4) { > - inner_l3_len =3D sizeof(struct ipv4_hdr); > - inner_ipv4_hdr =3D (struct ipv4_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len); > - inner_l4_proto =3D inner_ipv4_hdr->next_proto_id; > - > - /* HW offload */ > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM) { > - > - /* Do not delete, this is required by HW*/ > - inner_ipv4_hdr->hdr_checksum =3D 0; > - ol_flags |=3D PKT_TX_IPV4_CSUM; > - } > - } else if (eth_type =3D=3D ETHER_TYPE_IPv6) { > - inner_l3_len =3D sizeof(struct ipv6_hdr); > - inner_ipv6_hdr =3D (struct ipv6_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len); > - inner_l4_proto =3D inner_ipv6_hdr->proto; > - } > - > - if ((inner_l4_proto =3D=3D IPPROTO_UDP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM)) { > - inner_udp_hdr =3D (struct udp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - /* HW offload */ > - ol_flags |=3D PKT_TX_UDP_CKSUM; > - inner_udp_hdr->dgram_cksum =3D 0; > - if (eth_type =3D=3D ETHER_TYPE_IPv4) > - inner_udp_hdr->dgram_cksum =3D get_ipv4_psd_sum(inner_ipv4_hdr); > - else if (eth_type =3D=3D ETHER_TYPE_IPv6) > - inner_udp_hdr->dgram_cksum =3D get_ipv6_psd_sum(inner_ipv6_hdr); > - } else if ((inner_l4_proto =3D=3D IPPROTO_TCP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM)) { > - /* HW offload */ > - ol_flags |=3D PKT_TX_TCP_CKSUM; > - inner_tcp_hdr =3D (struct tcp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - > - if (eth_type =3D=3D ETHER_TYPE_IPv4) > - inner_tcp_hdr->cksum =3D get_ipv4_psd_sum(inner_ipv4_hdr); > - else if (eth_type =3D=3D ETHER_TYPE_IPv6) > - inner_tcp_hdr->cksum =3D get_ipv6_psd_sum(inner_ipv6_hdr); > - > - } else if ((inner_l4_proto =3D=3D IPPROTO_SCTP) && > - (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM)) { > - /* HW offload */ > - ol_flags |=3D PKT_TX_SCTP_CKSUM; > - inner_sctp_hdr =3D (struct sctp_hdr *) (rte_pktmbuf_mtod(mb, > - unsigned char *) + len + inner_l3_len); > - inner_sctp_hdr->cksum =3D 0; > - } > - > - } > - > - } > - else if (l4_proto =3D=3D IPPROTO_TCP) { > - tcp_hdr =3D (struct tcp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) { > - ol_flags |=3D PKT_TX_TCP_CKSUM; > - tcp_hdr->cksum =3D get_ipv6_psd_sum(ipv6_hdr); > - } > - else { > - tcp_hdr->cksum =3D 0; > - tcp_hdr->cksum =3D get_ipv6_udptcp_checksum(ipv6_hdr, > - (uint16_t*)tcp_hdr); > - } > - } > - else if (l4_proto =3D=3D IPPROTO_SCTP) { > - sctp_hdr =3D (struct sctp_hdr*) (rte_pktmbuf_mtod(mb, > - unsigned char *) + l2_len + l3_len); > - > - if (tx_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) { > - ol_flags |=3D PKT_TX_SCTP_CKSUM; > - sctp_hdr->cksum =3D 0; > - /* Sanity check, only number of 4 bytes supported by HW */ > - if ((rte_be_to_cpu_16(ipv6_hdr->payload_len) % 4) !=3D 0) > - printf("sctp payload must be a multiple " > - "of 4 bytes for checksum offload"); > - } > - else { > - /* CRC32c sample code available in RFC3309 */ > - sctp_hdr->cksum =3D 0; > - } > - } else { > - printf("Test flow control for 1G PMD \n"); > - } > - /* End of L6 Handling*/ > - } > - else { > - l3_len =3D 0; > - printf("Unhandled packet type: %#hx\n", eth_type); > + } else { > + /* this is only useful if an offload flag is > + * set, but it does not hurt to fill it in any > + * case */ > + m->l2_len =3D l2_len; > + m->l3_len =3D l3_len; > } > + m->ol_flags 
=3D ol_flags; >=20 > - /* Combine the packet header write. VLAN is not consider here */ > - mb->l2_len =3D l2_len; > - mb->l3_len =3D l3_len; > - mb->inner_l2_len =3D inner_l2_len; > - mb->inner_l3_len =3D inner_l3_len; > - mb->ol_flags =3D ol_flags; > } > nb_tx =3D rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx= ); > fs->tx_packets +=3D nb_tx; > @@ -629,7 +578,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) > #endif > } >=20 > - > struct fwd_engine csum_fwd_engine =3D { > .fwd_mode_name =3D "csum", > .port_fwd_begin =3D NULL, > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h > index 82af2bd..c753d37 100644 > --- a/app/test-pmd/testpmd.h > +++ b/app/test-pmd/testpmd.h > @@ -131,18 +131,11 @@ struct fwd_stream { > #define TESTPMD_TX_OFFLOAD_TCP_CKSUM 0x0004 > /** Offload SCTP checksum in csum forward engine */ > #define TESTPMD_TX_OFFLOAD_SCTP_CKSUM 0x0008 > -/** Offload inner IP checksum in csum forward engine */ > -#define TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM 0x0010 > -/** Offload inner UDP checksum in csum forward engine */ > -#define TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM 0x0020 > -/** Offload inner TCP checksum in csum forward engine */ > -#define TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM 0x0040 > -/** Offload inner SCTP checksum in csum forward engine */ > -#define TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM 0x0080 > -/** Offload inner IP checksum mask */ > -#define TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK 0x00F0 > +/** Offload VxLAN checksum in csum forward engine */ > +#define TESTPMD_TX_OFFLOAD_VXLAN_CKSUM 0x0010 > /** Insert VLAN header in forward engine */ > -#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0100 > +#define TESTPMD_TX_OFFLOAD_INSERT_VLAN 0x0020 > + > /** > * The data structure associated with each port. > */ > @@ -510,8 +503,6 @@ void tx_vlan_pvid_set(portid_t port_id, uint16_t vlan= _id, int on); >=20 > void set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_= t map_value); >=20 > -void tx_cksum_set(portid_t port_id, uint64_t ol_flags); > - > void set_verbose_level(uint16_t vb_level); > void set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs); > void set_nb_pkt_per_burst(uint16_t pkt_burst); > -- > 2.1.0
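
PS: to make the TX_PKT_IPV4/TX_PKT_IPV6 point above a bit more concrete, here
is a minimal sketch (not part of the patch; it borrows get_ipv6_psd_sum() from
csumonly.c, and the function name itself is made up) of what an application
has to do to get HW TCP checksum offload for an Ether/IPv6/TCP packet on
FVL-style HW:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

static void
request_hw_tcp_cksum_ipv6(struct rte_mbuf *m)
{
	struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
	struct ipv6_hdr *ip6 = (struct ipv6_hdr *)(eth + 1);
	struct tcp_hdr *tcp = (struct tcp_hdr *)(ip6 + 1);

	/* header lengths, used by the PMD TX path to locate L3/L4 */
	m->l2_len = sizeof(struct ether_hdr);
	m->l3_len = sizeof(struct ipv6_hdr);

	/* SW still has to pre-fill the L4 pseudo-header checksum;
	 * get_ipv6_psd_sum() is the helper from csumonly.c */
	tcp->cksum = get_ipv6_psd_sum(ip6);

	/* PKT_TX_IPV6 tells the PMD what the L3 header is (there is no
	 * IPv6 checksum to offload, but the HW still needs the L3 type);
	 * PKT_TX_TCP_CKSUM requests the L4 checksum itself */
	m->ol_flags |= PKT_TX_IPV6 | PKT_TX_TCP_CKSUM;
}

Without the PKT_TX_IPV6 bit the driver would have to mark the L3 type as
"unknown" in the descriptor, and the HW could not compute the TCP checksum
correctly.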