From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 24 Sep 2020 07:56:55 -0700
From: Stephen Hemminger
To: Igor Russkikh
Cc: , Rasesh Mody, Devendra Singh Rawat, Wenzhuo Lu, Beilei Xing, Bernard Iremonger
Message-ID: <20200924075647.3e160c0b@hermes.lan>
In-Reply-To: <20200924113414.483-1-irusskikh@marvell.com>
References: <20200924113414.483-1-irusskikh@marvell.com>
Subject: Re: [dpdk-dev] [RFC PATCH] app/testpmd: tx pkt clones parameter in flowgen
List-Id: DPDK patches and discussions

On Thu, 24 Sep 2020 14:34:14 +0300
Igor Russkikh wrote:

> When testing high performance numbers, it is often the case that CPU
> performance limits the maximum values a device can reach (both in pps
> and in gbps).
>
> Here, instead of recreating each packet separately, we use a clones
> counter to resend the same mbuf to the line multiple times.
>
> PMDs handle that transparently due to reference counting inside of mbuf.
>
> Verified on Marvell qede and atlantic PMDs.
>
> Signed-off-by: Igor Russkikh
> ---
>  app/test-pmd/flowgen.c                | 100 ++++++++++++++------------
>  app/test-pmd/parameters.c             |  12 ++++
>  app/test-pmd/testpmd.c                |   1 +
>  app/test-pmd/testpmd.h                |   1 +
>  doc/guides/testpmd_app_ug/run_app.rst |   7 ++
>  5 files changed, 74 insertions(+), 47 deletions(-)
>
> diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
> index acf3e2460..b6f6e7a0e 100644
> --- a/app/test-pmd/flowgen.c
> +++ b/app/test-pmd/flowgen.c
> @@ -94,6 +94,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
>  	uint16_t nb_rx;
>  	uint16_t nb_tx;
>  	uint16_t nb_pkt;
> +	uint16_t nb_clones = nb_pkt_clones;
>  	uint16_t i;
>  	uint32_t retry;
>  	uint64_t tx_offloads;
> @@ -123,53 +124,58 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
>  		ol_flags |= PKT_TX_MACSEC;
>
>  	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
> -		pkt = rte_mbuf_raw_alloc(mbp);
> -		if (!pkt)
> -			break;
> -
> -		pkt->data_len = pkt_size;
> -		pkt->next = NULL;
> -
> -		/* Initialize Ethernet header. */
> -		eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
> -		rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
> -		rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
> -		eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> -
> -		/* Initialize IP header. */
> -		ip_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
> -		memset(ip_hdr, 0, sizeof(*ip_hdr));
> -		ip_hdr->version_ihl = RTE_IPV4_VHL_DEF;
> -		ip_hdr->type_of_service = 0;
> -		ip_hdr->fragment_offset = 0;
> -		ip_hdr->time_to_live = IP_DEFTTL;
> -		ip_hdr->next_proto_id = IPPROTO_UDP;
> -		ip_hdr->packet_id = 0;
> -		ip_hdr->src_addr = rte_cpu_to_be_32(cfg_ip_src);
> -		ip_hdr->dst_addr = rte_cpu_to_be_32(cfg_ip_dst +
> -						    next_flow);
> -		ip_hdr->total_length = RTE_CPU_TO_BE_16(pkt_size -
> -							sizeof(*eth_hdr));
> -		ip_hdr->hdr_checksum = ip_sum((unaligned_uint16_t *)ip_hdr,
> -					      sizeof(*ip_hdr));
> -
> -		/* Initialize UDP header. */
> -		udp_hdr = (struct rte_udp_hdr *)(ip_hdr + 1);
> -		udp_hdr->src_port = rte_cpu_to_be_16(cfg_udp_src);
> -		udp_hdr->dst_port = rte_cpu_to_be_16(cfg_udp_dst);
> -		udp_hdr->dgram_cksum = 0; /* No UDP checksum. */
> -		udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_size -
> -						      sizeof(*eth_hdr) -
> -						      sizeof(*ip_hdr));
> -		pkt->nb_segs = 1;
> -		pkt->pkt_len = pkt_size;
> -		pkt->ol_flags &= EXT_ATTACHED_MBUF;
> -		pkt->ol_flags |= ol_flags;
> -		pkt->vlan_tci = vlan_tci;
> -		pkt->vlan_tci_outer = vlan_tci_outer;
> -		pkt->l2_len = sizeof(struct rte_ether_hdr);
> -		pkt->l3_len = sizeof(struct rte_ipv4_hdr);
> -		pkts_burst[nb_pkt] = pkt;
> +		if (!nb_pkt || !nb_clones) {
> +			nb_clones = nb_pkt_clones;
> +			pkt = rte_mbuf_raw_alloc(mbp);
> +			if (!pkt)
> +				break;
> +
> +			pkt->data_len = pkt_size;
> +			pkt->next = NULL;
> +
> +			/* Initialize Ethernet header. */
> +			eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
> +			rte_ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
> +			rte_ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
> +			eth_hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
> +
> +			/* Initialize IP header. */
> +			ip_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);
> +			memset(ip_hdr, 0, sizeof(*ip_hdr));
> +			ip_hdr->version_ihl = RTE_IPV4_VHL_DEF;
> +			ip_hdr->type_of_service = 0;
> +			ip_hdr->fragment_offset = 0;
> +			ip_hdr->time_to_live = IP_DEFTTL;
> +			ip_hdr->next_proto_id = IPPROTO_UDP;
> +			ip_hdr->packet_id = 0;
> +			ip_hdr->src_addr = rte_cpu_to_be_32(cfg_ip_src);
> +			ip_hdr->dst_addr = rte_cpu_to_be_32(cfg_ip_dst +
> +							    next_flow);
> +			ip_hdr->total_length = RTE_CPU_TO_BE_16(pkt_size -
> +								sizeof(*eth_hdr));
> +			ip_hdr->hdr_checksum = ip_sum((unaligned_uint16_t *)ip_hdr,
> +						      sizeof(*ip_hdr));
> +
> +			/* Initialize UDP header. */
> +			udp_hdr = (struct rte_udp_hdr *)(ip_hdr + 1);
> +			udp_hdr->src_port = rte_cpu_to_be_16(cfg_udp_src);
> +			udp_hdr->dst_port = rte_cpu_to_be_16(cfg_udp_dst);
> +			udp_hdr->dgram_cksum = 0; /* No UDP checksum. */
> +			udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_size -
> +							      sizeof(*eth_hdr) -
> +							      sizeof(*ip_hdr));
> +			pkt->nb_segs = 1;
> +			pkt->pkt_len = pkt_size;
> +			pkt->ol_flags &= EXT_ATTACHED_MBUF;
> +			pkt->ol_flags |= ol_flags;
> +			pkt->vlan_tci = vlan_tci;
> +			pkt->vlan_tci_outer = vlan_tci_outer;
> +			pkt->l2_len = sizeof(struct rte_ether_hdr);
> +			pkt->l3_len = sizeof(struct rte_ipv4_hdr);
> +		} else {
> +			nb_clones--;
> +		}
> +		pkts_burst[nb_pkt] = pkt;
>
>  		next_flow = (next_flow + 1) % cfg_n_flows;
>  	}

This doesn't look safe. You can't just send the same mbuf N times without
incrementing the reference count.
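
For illustration only, not part of the patch: a minimal sketch of what the
objection above implies, assuming the stock rte_mbuf_refcnt_update() helper
from rte_mbuf.h. The function name queue_pkt_clones() and the idx parameter
are made up for this example; the point is only the refcount bookkeeping.

/*
 * Illustrative sketch: if the same mbuf pointer is placed in the burst
 * array nb_clones times, the PMD will free it nb_clones times, so the
 * reference count must cover every copy.  rte_mbuf_raw_alloc() hands the
 * mbuf out with a refcount of 1, so add nb_clones - 1 before queueing.
 */
#include <rte_mbuf.h>

static inline void
queue_pkt_clones(struct rte_mbuf **pkts_burst, uint16_t idx,
		 struct rte_mbuf *pkt, uint16_t nb_clones)
{
	uint16_t i;

	if (nb_clones > 1)
		rte_mbuf_refcnt_update(pkt, (int16_t)(nb_clones - 1));

	/* Hand the same mbuf to the driver nb_clones times. */
	for (i = 0; i < nb_clones; i++)
		pkts_burst[idx + i] = pkt;
}

If the driver accepts fewer packets than queued, flowgen already frees the
unsent entries after rte_eth_tx_burst(), which would drop those extra
references again.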