From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Ananyev, Konstantin"
To: "Kulasek, TomaszX" , "dev@dpdk.org"
Cc: "jerin.jacob@caviumnetworks.com"
Date: Mon, 19 Sep 2016 12:59:57 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583F0B5827@irsmsx105.ger.corp.intel.com>
References: <1472228578-6980-1-git-send-email-tomaszx.kulasek@intel.com> <1473691487-10032-1-git-send-email-tomaszx.kulasek@intel.com> <1473691487-10032-7-git-send-email-tomaszx.kulasek@intel.com>
In-Reply-To: <1473691487-10032-7-git-send-email-tomaszx.kulasek@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 6/6] testpmd: add txprep engine
List-Id: patches and discussions about DPDK
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Tomasz Kulasek
> Sent: Monday, September 12, 2016 3:45 PM
> To: dev@dpdk.org
> Cc: jerin.jacob@caviumnetworks.com
> Subject: [dpdk-dev] [PATCH v2 6/6] testpmd: add txprep engine
>
> This patch adds a txprep engine to the testpmd application.
>
> The txprep engine is intended to verify the Tx preparation functionality implemented in the PMD driver.
>
> It's based on the default "io" engine with the following changes:
>  - Tx HW offloads are reset in incoming packets,
>  - the burst is passed to the Tx preparation function before tx burst,
>  - added "txsplit" and "tso" functionality for outgoing packets.

Do we really need a whole new mode with header parsing and packet splitting?
Can't we just modify the testpmd csumonly mode to use tx_prep() instead?

Konstantin

>
> Signed-off-by: Tomasz Kulasek
> ---
>  app/test-pmd/Makefile  |    3 +-
>  app/test-pmd/testpmd.c |    3 +
>  app/test-pmd/testpmd.h |    4 +-
>  app/test-pmd/txprep.c  |  412 ++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 420 insertions(+), 2 deletions(-)
>  create mode 100644 app/test-pmd/txprep.c
>
> diff --git a/app/test-pmd/Makefile b/app/test-pmd/Makefile
> index 2a0b5a5..3f9ad1c 100644
> --- a/app/test-pmd/Makefile
> +++ b/app/test-pmd/Makefile
> @@ -1,6 +1,6 @@
>  # BSD LICENSE
>  #
> -# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
> +# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
>  # All rights reserved.
>  #
>  # Redistribution and use in source and binary forms, with or without
> @@ -49,6 +49,7 @@ SRCS-y += parameters.c
>  SRCS-$(CONFIG_RTE_LIBRTE_CMDLINE) += cmdline.c
>  SRCS-y += config.c
>  SRCS-y += iofwd.c
> +SRCS-$(CONFIG_RTE_ETHDEV_TX_PREP) += txprep.c
>  SRCS-y += macfwd.c
>  SRCS-y += macswap.c
>  SRCS-y += flowgen.c
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1428974..9b6c475 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -152,6 +152,9 @@ struct fwd_engine * fwd_engines[] = {
>  	&rx_only_engine,
>  	&tx_only_engine,
>  	&csum_fwd_engine,
> +#ifdef RTE_ETHDEV_TX_PREP
> +	&txprep_fwd_engine,
> +#endif
>  	&icmp_echo_engine,
>  #ifdef RTE_LIBRTE_IEEE1588
>  	&ieee1588_fwd_engine,
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 2b281cc..f800846 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -239,7 +239,9 @@ extern struct fwd_engine icmp_echo_engine;
>  #ifdef RTE_LIBRTE_IEEE1588
>  extern struct fwd_engine ieee1588_fwd_engine;
>  #endif
> -
> +#ifdef RTE_ETHDEV_TX_PREP
> +extern struct fwd_engine txprep_fwd_engine;
> +#endif
>  extern struct fwd_engine * fwd_engines[]; /**< NULL terminated array. */
>
>  /**
> diff --git a/app/test-pmd/txprep.c b/app/test-pmd/txprep.c
> new file mode 100644
> index 0000000..688927e
> --- /dev/null
> +++ b/app/test-pmd/txprep.c
> @@ -0,0 +1,412 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + *   * Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> +   * Redistributions in binary form must reproduce the above copyright
> +     notice, this list of conditions and the following disclaimer in
> +     the documentation and/or other materials provided with the
> +     distribution.
> +   * Neither the name of Intel Corporation nor the names of its
> +     contributors may be used to endorse or promote products derived
> +     from this software without specific prior written permission.
> +
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "testpmd.h"
> +
> +/* We cannot use rte_cpu_to_be_16() on a constant in a switch/case */
> +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> +#define _htons(x) ((uint16_t)((((x) & 0x00ffU) << 8) | (((x) & 0xff00U) >> 8)))
> +#else
> +#define _htons(x) (x)
> +#endif
> +
> +/*
> + * Helper function.
> + * Performs actual copying.
> + * Returns number of segments in the destination mbuf on success,
> + * or negative error code on failure.
> + */
> +static int
> +mbuf_copy_split(const struct rte_mbuf *ms, struct rte_mbuf *md[],
> +	uint16_t seglen[], uint8_t nb_seg)
> +{
> +	uint32_t dlen, slen, tlen;
> +	uint32_t i, len;
> +	const struct rte_mbuf *m;
> +	const uint8_t *src;
> +	uint8_t *dst;
> +
> +	dlen = 0;
> +	slen = 0;
> +	tlen = 0;
> +
> +	dst = NULL;
> +	src = NULL;
> +
> +	m = ms;
> +	i = 0;
> +	while (ms != NULL && i != nb_seg) {
> +
> +		if (slen == 0) {
> +			slen = rte_pktmbuf_data_len(ms);
> +			src = rte_pktmbuf_mtod(ms, const uint8_t *);
> +		}
> +
> +		if (dlen == 0) {
> +			dlen = RTE_MIN(seglen[i], slen);
> +			md[i]->data_len = dlen;
> +			md[i]->next = (i + 1 == nb_seg) ?
> +				NULL : md[i + 1];
> +			dst = rte_pktmbuf_mtod(md[i], uint8_t *);
> +		}
> +
> +		len = RTE_MIN(slen, dlen);
> +		memcpy(dst, src, len);
> +		tlen += len;
> +		slen -= len;
> +		dlen -= len;
> +		src += len;
> +		dst += len;
> +
> +		if (slen == 0)
> +			ms = ms->next;
> +		if (dlen == 0)
> +			i++;
> +	}
> +
> +	if (ms != NULL)
> +		return -ENOBUFS;
> +	else if (tlen != m->pkt_len)
> +		return -EINVAL;
> +
> +	md[0]->nb_segs = nb_seg;
> +	md[0]->pkt_len = tlen;
> +	md[0]->vlan_tci = m->vlan_tci;
> +	md[0]->vlan_tci_outer = m->vlan_tci_outer;
> +	md[0]->ol_flags = m->ol_flags;
> +	md[0]->tx_offload = m->tx_offload;
> +
> +	return nb_seg;
> +}
> +
> +/*
> + * Allocate a new mbuf with up to tx_pkt_nb_segs segments.
> + * Copy packet contents and offload information into the new segmented mbuf.
> + */
> +static struct rte_mbuf *
> +pkt_copy_split(const struct rte_mbuf *pkt)
> +{
> +	int32_t n, rc;
> +	uint32_t i, len, nb_seg;
> +	struct rte_mempool *mp;
> +	uint16_t seglen[RTE_MAX_SEGS_PER_PKT];
> +	struct rte_mbuf *p, *md[RTE_MAX_SEGS_PER_PKT];
> +
> +	mp = current_fwd_lcore()->mbp;
> +
> +	if (tx_pkt_split == TX_PKT_SPLIT_RND)
> +		nb_seg = random() % tx_pkt_nb_segs + 1;
> +	else
> +		nb_seg = tx_pkt_nb_segs;
> +
> +	memcpy(seglen, tx_pkt_seg_lengths, nb_seg * sizeof(seglen[0]));
> +
> +	/* calculate number of segments to use and their length.
> +	 */
> +	len = 0;
> +	for (i = 0; i != nb_seg && len < pkt->pkt_len; i++) {
> +		len += seglen[i];
> +		md[i] = NULL;
> +	}
> +
> +	n = pkt->pkt_len - len;
> +
> +	/* update size of the last segment to fit rest of the packet */
> +	if (n >= 0) {
> +		seglen[i - 1] += n;
> +		len += n;
> +	}
> +
> +	nb_seg = i;
> +	while (i != 0) {
> +		p = rte_pktmbuf_alloc(mp);
> +		if (p == NULL) {
> +			RTE_LOG(ERR, USER1,
> +				"failed to allocate %u-th of %u mbuf "
> +				"from mempool: %s\n",
> +				nb_seg - i, nb_seg, mp->name);
> +			break;
> +		}
> +
> +		md[--i] = p;
> +		if (rte_pktmbuf_tailroom(md[i]) < seglen[i]) {
> +			RTE_LOG(ERR, USER1, "mempool %s, %u-th segment: "
> +				"expected seglen: %u, "
> +				"actual mbuf tailroom: %u\n",
> +				mp->name, i, seglen[i],
> +				rte_pktmbuf_tailroom(md[i]));
> +			break;
> +		}
> +	}
> +
> +	/* all mbufs successfully allocated, do copy */
> +	if (i == 0) {
> +		rc = mbuf_copy_split(pkt, md, seglen, nb_seg);
> +		if (rc < 0)
> +			RTE_LOG(ERR, USER1,
> +				"mbuf_copy_split for %p(len=%u, nb_seg=%hhu) "
> +				"into %u segments failed with error code: %d\n",
> +				pkt, pkt->pkt_len, pkt->nb_segs, nb_seg, rc);
> +
> +		/* figure out how many mbufs to free. */
> +		i = RTE_MAX(rc, 0);
> +	}
> +
> +	/* free unused mbufs */
> +	for (; i != nb_seg; i++) {
> +		rte_pktmbuf_free_seg(md[i]);
> +		md[i] = NULL;
> +	}
> +
> +	return md[0];
> +}
> +
> +/*
> + * Forwarding of packets in I/O mode.
> + * Forward packets with tx_prep.
> + * This is the fastest possible forwarding operation, as it does not
> + * access packet data.
> + */
> +static void
> +pkt_burst_txprep_forward(struct fwd_stream *fs)
> +{
> +	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
> +	struct rte_mbuf *p;
> +	struct rte_port *txp;
> +	int i;
> +	uint16_t nb_rx;
> +	uint16_t nb_prep;
> +	uint16_t nb_tx;
> +#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
> +	uint64_t start_tsc;
> +	uint64_t end_tsc;
> +	uint64_t core_cycles;
> +#endif
> +	uint16_t tso_segsz = 0;
> +	uint64_t ol_flags = 0;
> +
> +	struct ether_hdr *eth_hdr;
> +	struct vlan_hdr *vlan_hdr;
> +	struct ipv4_hdr *ipv4_hdr;
> +	struct ipv6_hdr *ipv6_hdr;
> +	struct tcp_hdr *tcp_hdr;
> +	char *l3_hdr = NULL;
> +
> +	uint8_t l4_proto = 0;
> +
> +#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
> +	start_tsc = rte_rdtsc();
> +#endif
> +
> +	/*
> +	 * Receive a burst of packets and forward them.
> +	 */
> +	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst,
> +			nb_pkt_per_burst);
> +	if (unlikely(nb_rx == 0))
> +		return;
> +
> +	txp = &ports[fs->tx_port];
> +	tso_segsz = txp->tso_segsz;
> +
> +	for (i = 0; i < nb_rx; i++) {
> +
> +		eth_hdr = rte_pktmbuf_mtod(pkts_burst[i], struct ether_hdr *);
> +		ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
> +				&eth_hdr->d_addr);
> +		ether_addr_copy(&ports[fs->tx_port].eth_addr,
> +				&eth_hdr->s_addr);
> +
> +		uint16_t ether_type = eth_hdr->ether_type;
> +
> +		pkts_burst[i]->l2_len = sizeof(struct ether_hdr);
> +
> +		ol_flags = 0;
> +
> +		if (tso_segsz > 0)
> +			ol_flags |= PKT_TX_TCP_SEG;
> +
> +		if (ether_type == _htons(ETHER_TYPE_VLAN)) {
> +			ol_flags |= PKT_TX_VLAN_PKT;
> +			vlan_hdr = (struct vlan_hdr *)(eth_hdr + 1);
> +			pkts_burst[i]->l2_len += sizeof(struct vlan_hdr);
> +			ether_type = vlan_hdr->eth_proto;
> +		}
> +
> +		switch (ether_type) {
> +		case _htons(ETHER_TYPE_IPv4):
> +			ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM);
> +			pkts_burst[i]->l3_len = sizeof(struct ipv4_hdr);
> +			pkts_burst[i]->l4_len = sizeof(struct tcp_hdr);
> +
> +			ipv4_hdr = (struct ipv4_hdr *)((char *)eth_hdr +
> +					pkts_burst[i]->l2_len);
> +			l3_hdr = (char *)ipv4_hdr;
> +			pkts_burst[i]->l3_len = (ipv4_hdr->version_ihl & 0x0f) * 4;
> +			l4_proto = ipv4_hdr->next_proto_id;
> +
> +			break;
> +		case _htons(ETHER_TYPE_IPv6):
> +			ol_flags |= PKT_TX_IPV6;
> +
> +			ipv6_hdr = (struct ipv6_hdr *)((char *)eth_hdr +
> +					pkts_burst[i]->l2_len);
> +			l3_hdr = (char *)ipv6_hdr;
> +			l4_proto = ipv6_hdr->proto;
> +			pkts_burst[i]->l3_len = sizeof(struct ipv6_hdr);
> +			break;
> +		default:
> +			printf("Unknown packet type\n");
> +			break;
> +		}
> +
> +		if (l4_proto == IPPROTO_TCP) {
> +			ol_flags |= PKT_TX_TCP_CKSUM;
> +			tcp_hdr = (struct tcp_hdr *)(l3_hdr + pkts_burst[i]->l3_len);
> +			pkts_burst[i]->l4_len = (tcp_hdr->data_off & 0xf0) >> 2;
> +		} else if (l4_proto == IPPROTO_UDP) {
> +			ol_flags |= PKT_TX_UDP_CKSUM;
> +			pkts_burst[i]->l4_len = sizeof(struct udp_hdr);
> +		}
> +
> +		pkts_burst[i]->tso_segsz = tso_segsz;
> +		pkts_burst[i]->ol_flags = ol_flags;
> +
> +		/* Do split & copy for the packet.
> +		 */
> +		if (tx_pkt_split != TX_PKT_SPLIT_OFF) {
> +			p = pkt_copy_split(pkts_burst[i]);
> +			if (p != NULL) {
> +				rte_pktmbuf_free(pkts_burst[i]);
> +				pkts_burst[i] = p;
> +			}
> +		}
> +
> +		/* if verbose mode is enabled, dump debug info */
> +		if (verbose_level > 0) {
> +			printf("l2_len=%d, l3_len=%d, l4_len=%d, nb_segs=%d, tso_segsz=%d\n",
> +				pkts_burst[i]->l2_len, pkts_burst[i]->l3_len,
> +				pkts_burst[i]->l4_len, pkts_burst[i]->nb_segs,
> +				pkts_burst[i]->tso_segsz);
> +		}
> +	}
> +
> +	/*
> +	 * Prepare burst to transmit
> +	 */
> +	nb_prep = rte_eth_tx_prep(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
> +
> +	if (nb_prep < nb_rx)
> +		printf("Preparing packet burst to transmit failed: %s\n",
> +			rte_strerror(rte_errno));
> +
> +#ifdef RTE_TEST_PMD_RECORD_BURST_STATS
> +	fs->rx_burst_stats.pkt_burst_spread[nb_rx]++;
> +#endif
> +	fs->rx_packets += nb_rx;
> +	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_prep);
> +	fs->tx_packets += nb_tx;
> +#ifdef RTE_TEST_PMD_RECORD_BURST_STATS
> +	fs->tx_burst_stats.pkt_burst_spread[nb_tx]++;
> +#endif
> +	if (unlikely(nb_tx < nb_rx)) {
> +		fs->fwd_dropped += (nb_rx - nb_tx);
> +		do {
> +			rte_pktmbuf_free(pkts_burst[nb_tx]);
> +		} while (++nb_tx < nb_rx);
> +	}
> +#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
> +	end_tsc = rte_rdtsc();
> +	core_cycles = (end_tsc - start_tsc);
> +	fs->core_cycles = (uint64_t) (fs->core_cycles + core_cycles);
> +#endif
> +}
> +
> +static void
> +txprep_fwd_begin(portid_t pi)
> +{
> +	struct rte_eth_dev_info dev_info;
> +
> +	rte_eth_dev_info_get(pi, &dev_info);
> +	printf("  nb_seg_max=%d, nb_mtu_seg_max=%d\n",
> +		dev_info.tx_desc_lim.nb_seg_max,
> +		dev_info.tx_desc_lim.nb_mtu_seg_max);
> +}
> +
> +static void
> +txprep_fwd_end(portid_t pi __rte_unused)
> +{
> +	printf("txprep_fwd_end\n");
> +}
> +
> +struct fwd_engine txprep_fwd_engine = {
> +	.fwd_mode_name = "txprep",
> +	.port_fwd_begin = txprep_fwd_begin,
> +	.port_fwd_end =
> +		txprep_fwd_end,
> +	.packet_fwd = pkt_burst_txprep_forward,
> +};
> --
> 1.7.9.5