From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20211021104142.2649060-1-xuemingl@nvidia.com>
 <20211021104142.2649060-5-xuemingl@nvidia.com>
In-Reply-To: <20211021104142.2649060-5-xuemingl@nvidia.com>
From: Ajit Khaparde
Date: Thu, 21 Oct 2021 12:45:48 -0700
To: Xueming Li
Cc: dpdk-dev, Zhang Yuying, Li Xiaoyun, Jerin Jacob, Ferruh Yigit,
 Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit,
 Ananyev Konstantin
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v13 4/7] app/testpmd: new parameter to enable
 shared Rx queue
List-Id: DPDK patches and discussions
Sender: "dev"

On Thu, Oct 21, 2021 at 3:42 AM Xueming Li wrote:
>
> Adds "--rxq-share=X" parameter to enable shared RxQ.
>
> The Rx queue is shared if the device supports it; otherwise it falls
> back to a standard RxQ.
>
> Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
> which implies all ports join share group 1. Queue IDs are mapped
> equally with shared Rx queue IDs.
>
> Signed-off-by: Xueming Li
> Acked-by: Thomas Monjalon
Acked-by: Ajit Khaparde

> ---
>  app/test-pmd/config.c                 |  7 ++++++-
>  app/test-pmd/parameters.c             | 13 +++++++++++++
>  app/test-pmd/testpmd.c                | 20 +++++++++++++++++---
>  app/test-pmd/testpmd.h                |  2 ++
>  doc/guides/testpmd_app_ug/run_app.rst |  6 ++++++
>  5 files changed, 44 insertions(+), 4 deletions(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index db36ca41b94..e4bbf457916 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -2890,7 +2890,12 @@ rxtx_config_display(void)
>  		printf("      RX threshold registers: pthresh=%d hthresh=%d "
>  		       " wthresh=%d\n",
>  		       pthresh_tmp, hthresh_tmp, wthresh_tmp);
> -		printf("      RX Offloads=0x%"PRIx64"\n", offloads_tmp);
> +		printf("      RX Offloads=0x%"PRIx64, offloads_tmp);
> +		if (rx_conf->share_group > 0)
> +			printf(" share_group=%u share_qid=%u",
> +			       rx_conf->share_group,
> +			       rx_conf->share_qid);
> +		printf("\n");
>  	}
>
>  	/* per tx queue config only for first queue to be less verbose */
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 779a721fa05..afc75f6bd21 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -171,6 +171,7 @@ usage(char* progname)
>  	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
>  	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
>  	printf("  --eth-link-speed: force link speed.\n");
> +	printf("  --rxq-share=X: number of ports per shared Rx queue groups, defaults to UINT32_MAX (1 group)\n");
>  	printf("  --disable-link-check: disable check on link status when "
>  	       "starting/stopping ports.\n");
>  	printf("  --disable-device-start: do not automatically start port\n");
> @@ -678,6 +679,7 @@ launch_args_parse(int argc, char** argv)
>  		{ "rxpkts",			1, 0, 0 },
>  		{ "txpkts",			1, 0, 0 },
>  		{ "txonly-multi-flow",		0, 0, 0 },
> +		{ "rxq-share",			2, 0, 0 },
>  		{ "eth-link-speed",		1, 0, 0 },
>  		{ "disable-link-check",		0, 0, 0 },
>  		{ "disable-device-start",	0, 0, 0 },
> @@ -1352,6 +1354,17 @@ launch_args_parse(int argc, char** argv)
>  			}
>  			if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
>  				txonly_multi_flow = 1;
> +			if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> +				if (optarg == NULL) {
> +					rxq_share = UINT32_MAX;
> +				} else {
> +					n = atoi(optarg);
> +					if (n >= 0)
> +						rxq_share = (uint32_t)n;
> +					else
> +						rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
> +				}
> +			}
>  			if (!strcmp(lgopts[opt_idx].name, "no-flush-rx"))
>  				no_flush_rx = 1;
>  			if (!strcmp(lgopts[opt_idx].name, "eth-link-speed")) {
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index af0e79fe6d5..80337bad382 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -502,6 +502,11 @@ uint8_t record_core_cycles;
>   */
>  uint8_t record_burst_stats;
>
> +/*
> + * Number of ports per shared Rx queue group, 0 disable.
> + */
> +uint32_t rxq_share;
> +
>  unsigned int num_sockets = 0;
>  unsigned int socket_ids[RTE_MAX_NUMA_NODES];
>
> @@ -3629,14 +3634,23 @@ dev_event_callback(const char *device_name, enum rte_dev_event_type type,
>  }
>
>  static void
> -rxtx_port_config(struct rte_port *port)
> +rxtx_port_config(portid_t pid)
>  {
>  	uint16_t qid;
>  	uint64_t offloads;
> +	struct rte_port *port = &ports[pid];
>
>  	for (qid = 0; qid < nb_rxq; qid++) {
>  		offloads = port->rx_conf[qid].offloads;
>  		port->rx_conf[qid] = port->dev_info.default_rxconf;
> +
> +		if (rxq_share > 0 &&
> +		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
> +			/* Non-zero share group to enable RxQ share. */
> +			port->rx_conf[qid].share_group = pid / rxq_share + 1;
> +			port->rx_conf[qid].share_qid = qid; /* Equal mapping. */
> +		}
> +
>  		if (offloads != 0)
>  			port->rx_conf[qid].offloads = offloads;
>
> @@ -3765,7 +3779,7 @@ init_port_config(void)
>  		}
>  	}
>
> -	rxtx_port_config(port);
> +	rxtx_port_config(pid);
>
>  	ret = eth_macaddr_get_print_err(pid, &port->eth_addr);
>  	if (ret != 0)
> @@ -3977,7 +3991,7 @@ init_port_dcb_config(portid_t pid,
>
>  	memcpy(&rte_port->dev_conf, &port_conf, sizeof(struct rte_eth_conf));
>
> -	rxtx_port_config(rte_port);
> +	rxtx_port_config(pid);
>  	/* VLAN filter */
>  	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
>  	for (i = 0; i < RTE_DIM(vlan_tags); i++)
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index e3995d24ab5..63f9913deb6 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -524,6 +524,8 @@ extern enum tx_pkt_split tx_pkt_split;
>
>  extern uint8_t txonly_multi_flow;
>
> +extern uint32_t rxq_share;
> +
>  extern uint16_t nb_pkt_per_burst;
>  extern uint16_t nb_pkt_flowgen_clones;
>  extern int nb_flows_flowgen;
> diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
> index 8ff7ab85369..faa3efb902c 100644
> --- a/doc/guides/testpmd_app_ug/run_app.rst
> +++ b/doc/guides/testpmd_app_ug/run_app.rst
> @@ -395,6 +395,12 @@ The command line options are:
>
>    Generate multiple flows in txonly mode.
>
> +* ``--rxq-share=[X]``
> +
> +  Create queues in shared Rx queue mode if device supports.
> +  Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
> +  implies all ports join share group 1.
> +
>  * ``--eth-link-speed``
>
>    Set a forced link speed to the ethernet port::
> --
> 2.33.0
>