From: Ferruh Yigit
To: Lance Richardson
Cc: Wenzhuo Lu, Xiaoyun Li, Bernard Iremonger, Steve Yang, dev@dpdk.org,
 stable@dpdk.org, oulijun@huawei.com, wisamm@mellanox.com, lihuisong@huawei.com
Subject: Re: [dpdk-dev] [PATCH v5] app/testpmd: fix setting maximum packet length
Date: Tue, 26 Jan 2021 11:01:09 +0000
Message-ID: <21284291-2b31-3518-5883-33237dd30bea@intel.com>

On 1/26/2021 3:45 AM, Lance Richardson wrote:
> On Mon, Jan 25, 2021 at 7:44 PM Ferruh Yigit wrote:
>>
>>>> +	if (rx_offloads != port->dev_conf.rxmode.offloads) {
>>>> +		uint16_t qid;
>>>> +
>>>> +		port->dev_conf.rxmode.offloads = rx_offloads;
>>>> +
>>>> +		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
>>>> +		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
>>>> +			if (on)
>>>> +				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>>>> +			else
>>>> +				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>>>> +		}
>>>
>>> Is it correct to set per-queue offloads that aren't advertised by the PMD
>>> as supported in rx_queue_offload_capa?
>>>
>>
>> 'port->rx_conf[]' is a testpmd struct, and the 'port->dev_conf.rxmode.offloads'
>> values are reflected to 'port->rx_conf[].offloads' for all queues.
>>
>> We should set the offload in 'port->rx_conf[].offloads' if it is set in
>> 'port->dev_conf.rxmode.offloads'.
>>
>> If a port has the 'JUMBO_FRAME' capability, 'port->rx_conf[].offloads' can
>> have it. And the port-level capability is already checked above.
>>
>
> I'm still not 100% clear about the per-queue offload question.
>
> With this patch, and jumbo max packet size configured (on the command
> line in this case), I see:
>
> testpmd> show port 0 rx_offload configuration
> Rx Offloading Configuration of port 0 :
>   Port : JUMBO_FRAME
>   Queue[ 0] : JUMBO_FRAME
>
> testpmd> show port 0 rx_offload capabilities
> Rx Offloading Capabilities of port 0 :
>   Per Queue :
>   Per Port  : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO
>               OUTER_IPV4_CKSUM VLAN_FILTER VLAN_EXTEND JUMBO_FRAME
>               SCATTER TIMESTAMP KEEP_CRC OUTER_UDP_CKSUM RSS_HASH
>

A port-level offload is applied to all queues on the port, and the
testpmd config structure reflects this logic in its implementation.
If Rx offload X is set for a port, it is set for all Rx queues. This
is not new behavior and is not related to this patch.

In ethdev, let's assume X & Y are port-level offloads. After X & Y are
set via 'rte_eth_dev_configure()', if the user calls
'rte_eth_rx_queue_setup()' with the X & Y offloads, this is a valid
call and the API will return success, since offloads already enabled
at the port level are enabled for all queues.

Because of the above ethdev behavior, testpmd keeps every enabled
port-level offload in the queue-level offload config too, and displays
them as enabled offloads for the queue.

To request a queue-specific offload, it is added to that specific
queue's config before calling queue setup. Let's say the queue-specific
offload is Z: after setup, the testpmd config struct will show that
this specific queue has the X, Y & Z offloads. (There is a small sketch
of the X/Y/Z case at the end of this mail.)

I hope it is more clear now.

> Yet if I configure a jumbo MTU starting with standard max packet size,
> jumbo is only enabled at the port level:
>
> testpmd> port config mtu 0 9000
> testpmd> port start all
>
> testpmd> show port 0 rx_offload configuration
> Rx Offloading Configuration of port 0 :
>   Port : JUMBO_FRAME
>   Queue[ 0] :
>
> It still seems odd for a per-queue offload to be enabled on a PMD that
> doesn't support per-queue receive offloads.
>

"port config mtu" should take queue offloads into account; it looks
wrong right now.
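
To show what I mean concretely, below is a rough, untested sketch (not
part of the patch) of how "port config mtu" could mirror the
JUMBO_FRAME bit into the per-queue config, reusing the loop from the
patch above. 'ports[]' and 'struct rte_port' are testpmd's own
structures; the function name is made up for illustration:

/*
 * Untested sketch only: keep port-level and queue-level JUMBO_FRAME
 * config in sync when the MTU crosses the jumbo threshold.
 */
static void
update_queue_jumbo_offload(portid_t port_id, int on)
{
	struct rte_port *port = &ports[port_id];
	uint16_t qid;

	/* Mirror the port-level JUMBO_FRAME bit into every queue config. */
	for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
		if (on)
			port->rx_conf[qid].offloads |=
				DEV_RX_OFFLOAD_JUMBO_FRAME;
		else
			port->rx_conf[qid].offloads &=
				~DEV_RX_OFFLOAD_JUMBO_FRAME;
	}
}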
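
And for reference, the X/Y/Z sketch mentioned above. It is a minimal,
untested example against the public ethdev API, not testpmd code; the
specific offload flags chosen to stand in for X, Y (port level) and Z
(queue level) are only for illustration:

#include <string.h>

#include <rte_ethdev.h>
#include <rte_lcore.h>

/*
 * Untested sketch only. JUMBO_FRAME and SCATTER play the role of the
 * port-level offloads X & Y; RSS_HASH plays the role of the queue-level
 * offload Z (queue-level support depends on the PMD, hence the
 * capability check).
 */
static int
setup_port_with_offloads(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	struct rte_eth_rxconf rxq_conf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	memset(&port_conf, 0, sizeof(port_conf));
	/* X & Y set at port level: they apply to every Rx queue. */
	port_conf.rxmode.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME |
				    DEV_RX_OFFLOAD_SCATTER;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	rxq_conf = dev_info.default_rxconf;
	/*
	 * Repeating X & Y at queue level is a valid call: they are
	 * already enabled port-wide, which means enabled for all queues.
	 */
	rxq_conf.offloads = port_conf.rxmode.offloads;

	/* Z requested only for this queue, if the PMD supports it. */
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rxq_conf, mb_pool);
}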