From: Ferruh Yigit
To: Slava Ovsiienko, Andrew Boyer
Cc: dev@dpdk.org, NBU-Contact-Thomas Monjalon, stable@dpdk.org
Date: Wed, 16 Dec 2020 12:12:10 +0000
Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix segment number check
References: <1607699265-5238-1-git-send-email-viacheslavo@nvidia.com>
 <40F5C8DC-8461-43E4-BF37-16FB1D4CA990@pensando.io>
List-Id: DPDK patches and discussions

On 12/11/2020 4:14 PM,
Slava Ovsiienko wrote:
> Hi, Andrew
>
> Thank you for the review, please, see below.
>
>> -----Original Message-----
>> From: Andrew Boyer
>> Sent: Friday, December 11, 2020 18:00
>> To: Slava Ovsiienko
>> Cc: dev@dpdk.org; NBU-Contact-Thomas Monjalon;
>> ferruh.yigit@intel.com; stable@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix segment number check
>>
>>> On Dec 11, 2020, at 10:07 AM, Viacheslav Ovsiienko wrote:
>>>
>>> The --txpkts command line parameter was silently ignored due to
>>> application was unable to check the Tx queue ring sizes for non
>>> configured ports [1].
>>
>> ... ignored because the application...
> OK, will fix.
>
>>> The "set txpkts " was also rejected if there was some
>>> stopped or /unconfigured port.
>>
>> ... was a stopped or unconfigured ...
> OK, will fix.
>
>>> This provides the following:
>>>
>>> - number of segment check is performed against
>>>   configured Tx queues only
>>>
>>> - the capability to send single packet is supposed to
>>>   be very basic and always supported, the setting segment
>>>   number to 1 is always allowed, no check performed
>>>
>>> - at the moment of Tx queue setup the descriptor number is
>>>   checked against configured segment number
>>>
>>> Fixes: 8dae835d88b7 ("app/testpmd: remove restriction on Tx segments set")
>>> Cc: stable@dpdk.org
>>> Bugzilla ID: 584
>>>
>>> Signed-off-by: Viacheslav Ovsiienko
>>> ---
>>>  app/test-pmd/cmdline.c |  5 +++++
>>>  app/test-pmd/config.c  | 21 ++++++++++++++++-----
>>>  app/test-pmd/testpmd.c |  7 ++++++-
>>>  3 files changed, 27 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
>>> index 0d2d6aa..86388a2 100644
>>> --- a/app/test-pmd/cmdline.c
>>> +++ b/app/test-pmd/cmdline.c
>>> @@ -2798,6 +2798,11 @@ struct cmd_setup_rxtx_queue {
>>>  	if (!numa_support || socket_id == NUMA_NO_CONFIG)
>>>  		socket_id = port->socket_id;
>>>
>>> +	if (port->nb_tx_desc[res->qid] < tx_pkt_nb_segs) {
>>> +		printf("Failed to setup TX queue: "
>>
>> setup -> set up
> Disagree, it is quite common in testpmd code to use "setup" wording,
> I just copy-pasted the message from the neighbor lines.
>
>> I find it helpful when the numbers are logged in the error message.
>> Like "nb_desc 8 < nb_segs 16".
>>
>>> +		       "not enough descriptors\n");
>>> +		return;
>>> +	}
>>
> Do you think it is worth to be informative so much? OK, will add.
>
>> Why is there a relationship between the number of descriptors and the
>> number of segments? For our device, there isn't. We can send 16 Tx
>> segments per descriptor and (I suppose) you could try to create an
>> 8 descriptor ring.
>>
>> Maybe this is to protect a simpler device that consumes one descriptor
>> per segment? If so, the check would ideally be conditioned on a related
>> device capability flag. I'm not sure if there is such a flag today.
> There is no correlation between n_desc and n_seg for Tx in mlx5 PMD either.
> And there is no information provided how many descriptors should be
> provided for the multi-segment packets.
>
> If we have a look at original commit being fixed
> ("app/testpmd: remove restriction on Tx segments set") we'll see:
>
> -	if (nb_segs >= (unsigned) nb_txd) {
> -		printf("nb segments per TX packets=%u >= nb_txd=%u - ignored\n",
> -		       nb_segs, (unsigned int) nb_txd);
>
> So, the check was added in replacement for other, more strict, check.
> Now we are just improving one a little bit.

Many devices use a descriptor per segment, and if there are not enough
free descriptors to fit all the segments they won't be able to send the
packet; I guess this check is to cover them.

Out of curiosity, since your device has 16 buffer address fields in the
descriptor, can they be utilized to send multiple independent packets in
a single descriptor?
>>>  		ret = rte_eth_tx_queue_setup(res->portid,
>>>  					     res->qid,
>>>  					     port->nb_tx_desc[res->qid],
>>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>>> index b51de59..a6fccfa 100644
>>> --- a/app/test-pmd/config.c
>>> +++ b/app/test-pmd/config.c
>>> @@ -3911,12 +3911,18 @@ struct igb_ring_desc_16_bytes {
>>>  	for (queue_id = 0; queue_id < nb_txq; queue_id++) {
>>>  		ret = get_tx_ring_size(port_id, queue_id, &ring_size);
>>>
>>> -		if (ret)
>>> +		/* Do the check only for the active/configured ports. */
>>> +		if (ret == -EINVAL)
>>> +			continue;
>>> +		if (ret) {
>>> +			printf("failed to get ring size for TX "
>>> +			       "queue(%u) Port(%u) - txpkts ignored\n",
>>> +			       port_id, queue_id);
>>>  			return true;
>>> -
>>> +		}
>>>  		if (ring_size < nb_segs) {
>>> -			printf("nb segments per TX packets=%u >= "
>>> -			       "TX queue(%u) ring_size=%u - ignored\n",
>>> +			printf("nb segments per TX packets=%u >= TX "
>>> +			       "queue(%u) ring_size=%u - txpkts ignored\n",
>>>  			       nb_segs, queue_id, ring_size);
>>>  			return true;
>>>  		}
>>> @@ -3932,7 +3938,12 @@ struct igb_ring_desc_16_bytes {
>>>  	uint16_t tx_pkt_len;
>>>  	unsigned int i;
>>>
>>> -	if (nb_segs_is_invalid(nb_segs))
>>> +	/*
>>> +	 * For single sengment settings failed check is ignored.
>>> +	 * It is a very basic capability to send the single segment
>>> +	 * packets, suppose it is always supported.
>>
>> sengment -> segment
>> ... to send single segment...
>> suppose -> assume
> OK, np, will fix.
>>> +	 */
>>> +	if (nb_segs > 1 && nb_segs_is_invalid(nb_segs))
>>>  		return;
>>>
>>>  	/*
>>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>>> index 33fc0fd..9ea0145 100644
>>> --- a/app/test-pmd/testpmd.c
>>> +++ b/app/test-pmd/testpmd.c
>>> @@ -2575,6 +2575,11 @@ struct extmem_param {
>>>  			port->need_reconfig_queues = 0;
>>>  			/* setup tx queues */
>>>  			for (qi = 0; qi < nb_txq; qi++) {
>>> +				if (port->nb_tx_desc[qi] < tx_pkt_nb_segs) {
>>> +					printf("Failed to setup TX queue: "
>>> +					       "not enough descriptors\n");
>>
>> Same comments as above
> OK.
>
>>> +					goto fail;
>>> +				}
>>>  				if ((numa_support) &&
>>>  					(txring_numa[pi] != NUMA_NO_CONFIG))
>>>  					diag = rte_eth_tx_queue_setup(pi, qi,
>>> @@ -2589,7 +2594,7 @@ struct extmem_param {
>>>
>>>  			if (diag == 0)
>>>  				continue;
>>> -
>>> +fail:
>>>  			/* Fail to setup tx queue, return */
>>>  			if (rte_atomic16_cmpset(&(port->port_status),
>>>  				RTE_PORT_HANDLING,
>>> --
>>> 1.8.3.1
>>>
>
> Thanks a lot, I will wait for a while for more comments and provide v2.
>
> With best regards, Slava
>