From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ferruh Yigit
To: Lance Richardson
Cc: Wenzhuo Lu, Xiaoyun Li, Bernard Iremonger, Steve Yang, dev@dpdk.org,
 stable@dpdk.org, oulijun@huawei.com, wisamm@mellanox.com, lihuisong@huawei.com
Subject: Re: [dpdk-dev] [PATCH v5] app/testpmd: fix setting maximum packet length
Date: Tue, 26 Jan 2021 00:44:06 +0000
Message-ID: <1efbcf83-8b7f-682c-7494-44cacef55e36@intel.com>
References: <20210125083202.38267-1-stevex.yang@intel.com>
 <20210125181548.2713326-1-ferruh.yigit@intel.com>
Content-Type: text/plain; charset=utf-8; format=flowed

On 1/25/2021 7:41 PM, Lance Richardson wrote:
> On Mon, Jan 25, 2021 at 1:15 PM Ferruh Yigit wrote:
>>
>> From: Steve Yang
>>
>> The "port config all max-pkt-len" command fails because it doesn't set
>> the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag properly.
>>
>> The commit in the Fixes line moved the 'DEV_RX_OFFLOAD_JUMBO_FRAME'
>> offload flag update from 'cmd_config_max_pkt_len_parsed()' to
>> 'init_config()'. The 'init_config()' function is only called during
>> testpmd startup, but the flag status needs to be recalculated whenever
>> 'max_rx_pkt_len' changes.
>>
>> The issue can be reproduced as in [1], where 'max-pkt-len' is reduced
>> and the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag should be cleared,
>> but it isn't.
>>
>> Add the 'update_jumbo_frame_offload()' helper function to update the
>> 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag and 'max_rx_pkt_len'. This
>> function is called by both 'init_config()' and
>> 'cmd_config_max_pkt_len_parsed()'.
>>
>> The default 'max-pkt-len' value is set to zero;
>> 'update_jumbo_frame_offload()' updates it to "RTE_ETHER_MTU +
>> PMD-specific Ethernet overhead" when it is zero.
>> If the '--max-pkt-len=N' argument is provided, it is used instead.
>> And with each "port config all max-pkt-len" command, the
>> 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag, 'max-pkt-len' and MTU are
>> updated.
>>
>> [1]
>
>
>
>> +/*
>> + * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
>> + * MTU is also aligned if JUMBO_FRAME offload is not set.
>> + *
>> + * port->dev_info should be get before calling this function.
>
> Should this be "port->dev_info should be set ..." instead?
>

Ack.

>
>
>
>> +	if (rx_offloads != port->dev_conf.rxmode.offloads) {
>> +		uint16_t qid;
>> +
>> +		port->dev_conf.rxmode.offloads = rx_offloads;
>> +
>> +		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
>> +		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
>> +			if (on)
>> +				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>> +			else
>> +				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>> +		}
>
> Is it correct to set per-queue offloads that aren't advertised by the PMD
> as supported in rx_queue_offload_capa?
>

'port->rx_conf[]' is a testpmd struct, and the 'port->dev_conf.rxmode.offloads'
values are reflected to 'port->rx_conf[].offloads' for all queues.

We should set the offload in 'port->rx_conf[].offloads' if it is set in
'port->dev_conf.rxmode.offloads'.

If a port has the 'JUMBO_FRAME' capability, 'port->rx_conf[].offloads' can have
it. And the port-level capability is already checked above.

>> +	}
>> +
>> +	/* If JUMBO_FRAME is set, MTU conversion is done by the ethdev layer,
>> +	 * if unset do it here
>> +	 */
>> +	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
>> +		ret = rte_eth_dev_set_mtu(portid,
>> +				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
>> +		if (ret)
>> +			printf("Failed to set MTU to %u for port %u\n",
>> +				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
>> +				portid);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>
> Applied and tested with a few iterations of configuring max packet size
> back and forth between jumbo and non-jumbo sizes, also tried setting
> max packet size using the command-line option; all seems good based
> on rx offloads and packet forwarding.
>
> Two minor questions above, otherwise LGTM.
>

Thanks for testing. I will wait for more comments before sending a new version.
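
For anyone following the thread, below is a rough sketch of the decision flow
the commit message describes: resolve 'max_rx_pkt_len' (defaulting it to
RTE_ETHER_MTU plus the PMD-specific overhead when the user gave none) and then
derive the JUMBO_FRAME flag from it on every change. This is not the actual
patch; the function name 'sketch_update_jumbo', its parameters and the exact
overhead fallback are illustrative only.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Illustrative only: recompute max_rx_pkt_len and the JUMBO_FRAME flag the
 * way the commit message describes, on every max-pkt-len change. */
static int
sketch_update_jumbo(const struct rte_eth_dev_info *dev_info,
		    struct rte_eth_conf *dev_conf,
		    uint32_t requested_max_pkt_len) /* 0 means "not set" */
{
	uint32_t eth_overhead;

	/* PMD-specific Ethernet overhead when the PMD reports max_mtu,
	 * otherwise fall back to plain Ethernet header + CRC. */
	if (dev_info->max_mtu != UINT16_MAX &&
	    dev_info->max_rx_pktlen > dev_info->max_mtu)
		eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
	else
		eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	/* Default max-pkt-len is zero: resolve it to MTU + overhead. */
	if (requested_max_pkt_len == 0)
		dev_conf->rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
	else
		dev_conf->rxmode.max_rx_pkt_len = requested_max_pkt_len;

	/* Set or clear JUMBO_FRAME on every change; this is the update that
	 * was lost when it moved into init_config(). */
	if (dev_conf->rxmode.max_rx_pkt_len > RTE_ETHER_MTU + eth_overhead)
		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
	else
		dev_conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;

	return 0;
}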
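
The back-and-forth test Lance describes corresponds roughly to a testpmd
sequence like the one below (the packet sizes are arbitrary and the 'show'
output format varies per PMD). After the second reconfiguration, JUMBO_FRAME
should no longer appear in the Rx offload configuration, which is exactly the
case that was broken before this fix:

testpmd> port stop all
testpmd> port config all max-pkt-len 9000
testpmd> port start all
testpmd> show port 0 rx_offload configuration
testpmd> port stop all
testpmd> port config all max-pkt-len 1518
testpmd> port start all
testpmd> show port 0 rx_offload configuration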