From: Ferruh Yigit
To: Lance Richardson
Cc: Wenzhuo Lu, Xiaoyun Li, Bernard Iremonger, Steve Yang, dev@dpdk.org,
 stable@dpdk.org, oulijun@huawei.com, wisamm@mellanox.com, lihuisong@huawei.com
Date: Thu, 28 Jan 2021 22:12:22 +0000
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH v5] app/testpmd: fix setting maximum packet length

On 1/28/2021 9:36 PM, Lance Richardson wrote:
> On Tue, Jan 26, 2021 at 6:01 AM Ferruh Yigit wrote:
>>
>> On 1/26/2021 3:45 AM, Lance Richardson wrote:
>>> On Mon, Jan 25, 2021 at 7:44 PM Ferruh Yigit wrote:
>>>>
>>>>>> +	if (rx_offloads != port->dev_conf.rxmode.offloads) {
>>>>>> +		uint16_t qid;
>>>>>> +
>>>>>> +		port->dev_conf.rxmode.offloads = rx_offloads;
>>>>>> +
>>>>>> +		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
>>>>>> +		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
>>>>>> +			if (on)
>>>>>> +				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>>>>>> +			else
>>>>>> +				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>>>>>> +		}
>>>>>
>>>>> Is it correct to set per-queue offloads that aren't advertised by the PMD
>>>>> as supported in rx_queue_offload_capa?
>>>>>
>>>>
>>>> 'port->rx_conf[]' is a testpmd struct, and the
>>>> 'port->dev_conf.rxmode.offloads' values are reflected to
>>>> 'port->rx_conf[].offloads' for all queues.
>>>>
>>>> We should set the offload in 'port->rx_conf[].offloads' if it is set in
>>>> 'port->dev_conf.rxmode.offloads'.
>>>>
>>>> If a port has the capability for 'JUMBO_FRAME', 'port->rx_conf[].offloads'
>>>> can have it. And the port-level capability is already checked above.
>>>>
>>>
>>> I'm still not 100% clear about the per-queue offload question.
>>>
>>> With this patch, and jumbo max packet size configured (on the command
>>> line in this case), I see:
>>>
>>> testpmd> show port 0 rx_offload configuration
>>> Rx Offloading Configuration of port 0 :
>>>   Port : JUMBO_FRAME
>>>   Queue[ 0] : JUMBO_FRAME
>>>
>>> testpmd> show port 0 rx_offload capabilities
>>> Rx Offloading Capabilities of port 0 :
>>>   Per Queue :
>>>   Per Port  : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO
>>> OUTER_IPV4_CKSUM VLAN_FILTER VLAN_EXTEND JUMBO_FRAME SCATTER TIMESTAMP
>>> KEEP_CRC OUTER_UDP_CKSUM RSS_HASH
>>>
>>
>> The port-level offload is applied to all queues on the port; the testpmd
>> config structure reflects this logic in its implementation.
>> If Rx offload X is set for a port, it is set for all Rx queue offloads.
>> This is not new behavior and is not related to this patch.
>>
> OK, is this purely for display purposes within testpmd? I ask because
> it appears that all PMDs supporting per-queue offload configuration already
> take care of combining port-level and per-queue offloads within their
> tx_queue_setup()/rx_queue_setup() functions and then track the combined
> set of offloads within a per-queue field, e.g. this line is common to the
> e1000/i40e/ionic/ixgbe/octeontx2/thunderx/txgbe rx_queue_setup()
> implementations:
>
>     offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> And rte_ethdev.h says:
>
>     No need to repeat any bit in rx_conf->offloads which has already been
>     enabled in rte_eth_dev_configure() at port level. An offloading enabled
>     at port level can't be disabled at queue level.
>
> Which I suppose confirms that if testpmd is combining per-port and per-
> queue offloads, it's just for the purposes of testpmd.
>

Yes, it is just for the purposes of testpmd (the application), but it doesn't
need to be limited to display purposes; testpmd keeps the config locally for
whatever it may need.

> Apologies for worrying at this even more, I just wanted to be sure that
> I understand what the PMD is expected to do.
>

PMDs should not be getting these offloads in queue setup, since the ethdev
layer strips the port-level offloads from the queue-level request. I mean,
if the port-level offloads are X and Y, and the application provides the
following to queue_setup():

a) X
b) Y
c) X, Y

For all three, the PMD receives an empty queue offload request.

d) X, Y, Z

Here the PMD gets Z, provided Z is in the driver's queue-specific
capabilities.
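
For reference, the ethdev layer does roughly the following in
rte_eth_rx_queue_setup() before calling the driver's queue setup op. This is
a simplified sketch of that logic, not the exact upstream code:

	struct rte_eth_rxconf local_conf = *rx_conf;

	/*
	 * Drop any offload that was already enabled at port level via
	 * rte_eth_dev_configure(); only queue-specific additions are
	 * left in the request that reaches the PMD.
	 */
	local_conf.offloads &= ~dev->data->dev_conf.rxmode.offloads;

	/* Whatever remains must be supported as a per-queue offload. */
	if ((local_conf.offloads & dev_info.rx_queue_offload_capa) !=
			local_conf.offloads)
		return -EINVAL;

	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
			socket_id, &local_conf, mp);

So for cases a), b) and c) above the driver sees an empty
'local_conf.offloads', and for case d) it sees only Z, and only if Z passes
the rx_queue_offload_capa check.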