From: Ferruh Yigit
To: Viacheslav Ovsiienko, Xiaoyun Li, "Wei Hu (Xavier)", Chengchang Tang
Cc: Ferruh Yigit, dev@dpdk.org, stable@dpdk.org, Andrew Boyer
Date: Fri, 23 Apr 2021 17:09:52 +0100
Message-Id: <20210423160952.336272-1-ferruh.yigit@intel.com>
In-Reply-To: <1607699265-5238-1-git-send-email-viacheslavo@nvidia.com>
References: <1607699265-5238-1-git-send-email-viacheslavo@nvidia.com>
Subject: [dpdk-dev] [PATCH v2] app/testpmd: fix segment number check

From: Viacheslav Ovsiienko

The --txpkts command line parameter was silently ignored because the
application was unable to check the Tx queue ring sizes for not yet
configured ports [1]. The "set txpkts" command was also rejected if
there was some stopped or unconfigured port.

This patch provides the following:

- If getting the ring size from a port fails, this can be because the
  port is not initialized yet; ignore the check and only make sure the
  segment number cannot cause an out of bound access. The port
  descriptor check will be done during Tx queue setup.

- The capability to send a single-segment packet is supposed to be very
  basic and always supported; setting the segment number to 1 is always
  allowed and no check is performed.

- At Tx queue setup time the descriptor number is checked against the
  configured segment number.

Bugzilla ID: 584
Fixes: 8dae835d88b7 ("app/testpmd: remove restriction on Tx segments set")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko
Signed-off-by: Ferruh Yigit
---
Cc: Andrew Boyer

v2:
* Be more flexible for the '--txpkts' command line option: if the
  descriptor ring size cannot be obtained from the port, ignore the
  check. (Checking against 'nb_txd' was proposed before, but that would
  require the '--txd' parameter and also enforce a specific order on the
  parameters; instead, the checks for the parameter are relaxed.)
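
For reference only (not part of the patch): a minimal, hypothetical
standalone sketch of the relaxed validation flow described above.
get_ring_size(), MAX_SEGS_PER_PKT and the port/queue counts are
stand-ins for testpmd's get_tx_ring_size(), RTE_MAX_SEGS_PER_PKT and the
real port state, so the sketch compiles and runs without DPDK.

#include <stdbool.h>
#include <stdio.h>

#define NB_PORTS 2
#define NB_TXQ   2
#define MAX_SEGS_PER_PKT 255	/* stand-in for RTE_MAX_SEGS_PER_PKT */

/* Stub: returns non-zero when the port is not configured yet. */
static int
get_ring_size(unsigned int port_id, unsigned int queue_id,
	      unsigned int *ring_size)
{
	(void)queue_id;
	if (port_id == 1)
		return -1;	/* port 1 not started: size unavailable */
	*ring_size = 512;
	return 0;
}

/*
 * Relaxed check: single segment is always allowed, an unknown ring size
 * is skipped (the real check happens later at Tx queue setup), and the
 * segment number is still bounded to avoid out of bound access.
 */
static bool
nb_segs_allowed(unsigned int nb_segs)
{
	unsigned int port_id, queue_id, ring_size;

	if (nb_segs == 1)
		return true;	/* single-segment send assumed always supported */

	if (nb_segs > MAX_SEGS_PER_PKT)
		return false;	/* would overrun the per-packet segment array */

	for (port_id = 0; port_id < NB_PORTS; port_id++) {
		for (queue_id = 0; queue_id < NB_TXQ; queue_id++) {
			if (get_ring_size(port_id, queue_id, &ring_size) != 0)
				continue;	/* port not initialized yet, check at queue setup */
			if (ring_size < nb_segs) {
				printf("nb_segs=%u > ring_size=%u - rejected\n",
				       nb_segs, ring_size);
				return false;
			}
		}
	}
	return true;
}

int
main(void)
{
	printf("8 segments: %s\n", nb_segs_allowed(8) ? "allowed" : "rejected");
	printf("1 segment:  %s\n", nb_segs_allowed(1) ? "allowed" : "rejected");
	return 0;
}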
---
 app/test-pmd/cmdline.c |  4 ++++
 app/test-pmd/config.c  | 32 ++++++++++++++++++++++++--------
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 12efbc0cab46..7feba8337781 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2910,6 +2910,10 @@ cmd_setup_rxtx_queue_parsed(
 		if (!numa_support || socket_id == NUMA_NO_CONFIG)
 			socket_id = port->socket_id;
 
+		if (port->nb_tx_desc[res->qid] < tx_pkt_nb_segs) {
+			printf("Failed to setup TX queue: not enough descriptors\n");
+			return;
+		}
 		ret = rte_eth_tx_queue_setup(res->portid,
 					     res->qid,
 					     port->nb_tx_desc[res->qid],
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e189062efde8..a4445a73bfa5 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3697,13 +3697,15 @@ nb_segs_is_invalid(unsigned int nb_segs)
 	RTE_ETH_FOREACH_DEV(port_id) {
 		for (queue_id = 0; queue_id < nb_txq; queue_id++) {
 			ret = get_tx_ring_size(port_id, queue_id, &ring_size);
-
-			if (ret)
-				return true;
-
+			if (ret) {
+				/* Port may not be initialized yet, can't say
+				 * the port is invalid in this stage.
+				 */
+				continue;
+			}
 			if (ring_size < nb_segs) {
-				printf("nb segments per TX packets=%u >= "
-				       "TX queue(%u) ring_size=%u - ignored\n",
+				printf("nb segments per TX packets=%u >= TX "
+				       "queue(%u) ring_size=%u - txpkts ignored\n",
 				       nb_segs, queue_id, ring_size);
 				return true;
 			}
@@ -3719,12 +3721,26 @@ set_tx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs)
 	uint16_t tx_pkt_len;
 	unsigned int i;
 
-	if (nb_segs_is_invalid(nb_segs))
+	/*
+	 * For single segment settings failed check is ignored.
+	 * It is a very basic capability to send the single segment
+	 * packets, suppose it is always supported.
+	 */
+	if (nb_segs > 1 && nb_segs_is_invalid(nb_segs)) {
+		printf("Tx segment size(%u) is not supported - txpkts ignored\n",
+		       nb_segs);
 		return;
+	}
+
+	if (nb_segs > RTE_MAX_SEGS_PER_PKT) {
+		printf("Tx segment size(%u) is bigger than max number of segment(%u)\n",
+		       nb_segs, RTE_MAX_SEGS_PER_PKT);
+		return;
+	}
 
 	/*
 	 * Check that each segment length is greater or equal than
-	 * the mbuf data sise.
+	 * the mbuf data size.
 	 * Check also that the total packet length is greater or equal than the
 	 * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
 	 * 20 + 8).
-- 
2.30.2