Subject: Re: [dpdk-dev] [PATCH v5] app/testpmd: fix setting maximum packet length
From: Lance Richardson
Date: Thu, 28 Jan 2021 16:36:36 -0500
To: Ferruh Yigit
Cc: Wenzhuo Lu, Xiaoyun Li, Bernard Iremonger, Steve Yang, dev@dpdk.org, stable@dpdk.org, oulijun@huawei.com, wisamm@mellanox.com, lihuisong@huawei.com
In-Reply-To: <21284291-2b31-3518-5883-33237dd30bea@intel.com>
References: <20210125083202.38267-1-stevex.yang@intel.com> <20210125181548.2713326-1-ferruh.yigit@intel.com> <1efbcf83-8b7f-682c-7494-44cacef55e36@intel.com> <21284291-2b31-3518-5883-33237dd30bea@intel.com>

On Tue, Jan 26, 2021 at 6:01 AM Ferruh Yigit wrote:
>
> On 1/26/2021 3:45 AM, Lance Richardson wrote:
> > On Mon, Jan 25, 2021 at 7:44 PM Ferruh Yigit wrote:
> >>
> >>>> +	if (rx_offloads != port->dev_conf.rxmode.offloads) {
> >>>> +		uint16_t qid;
> >>>> +
> >>>> +		port->dev_conf.rxmode.offloads = rx_offloads;
> >>>> +
> >>>> +		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
> >>>> +		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
> >>>> +			if (on)
> >>>> +				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> >>>> +			else
> >>>> +				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> >>>> +		}
> >>>
> >>> Is it correct to set per-queue offloads that aren't advertised by the PMD
> >>> as supported in rx_queue_offload_capa?
> >>>
> >>
> >> 'port->rx_conf[]' is a testpmd struct, and 'port->dev_conf.rxmode.offloads'
> >> values are reflected to 'port->rx_conf[].offloads' for all queues.
> >>
> >> We should set the offload in 'port->rx_conf[].offloads' if it is set in
> >> 'port->dev_conf.rxmode.offloads'.
> >>
> >> If a port has the capability for 'JUMBO_FRAME', 'port->rx_conf[].offloads'
> >> can have it. And the port-level capability is already checked above.
> >>
> >
> > I'm still not 100% clear about the per-queue offload question.
> >
> > With this patch, and jumbo max packet size configured (on the command
> > line in this case), I see:
> >
> > testpmd> show port 0 rx_offload configuration
> > Rx Offloading Configuration of port 0 :
> >   Port      : JUMBO_FRAME
> >   Queue[ 0] : JUMBO_FRAME
> >
> > testpmd> show port 0 rx_offload capabilities
> > Rx Offloading Capabilities of port 0 :
> >   Per Queue :
> >   Per Port  : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO
> >               OUTER_IPV4_CKSUM VLAN_FILTER VLAN_EXTEND JUMBO_FRAME SCATTER
> >               TIMESTAMP KEEP_CRC OUTER_UDP_CKSUM RSS_HASH
> >
>
> The port-level offload is applied to all queues on the port, and the testpmd
> config structure reflects this logic in its implementation.
> If Rx offload X is set for a port, it is set for all Rx queue offloads; this
> is not new behavior and not related to this patch.
>

OK, is this purely for display purposes within testpmd?

I ask because it appears that all PMDs supporting per-queue offload
configuration already take care of combining port-level and per-queue
offloads within their tx_queue_setup()/rx_queue_setup() functions and
then track the combined set of offloads in a per-queue field. For example,
this line is common to the e1000/i40e/ionic/ixgbe/octeontx2/thunderx/txgbe
rx_queue_setup() implementations:

    offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;

And rte_ethdev.h says:

    No need to repeat any bit in rx_conf->offloads which has already been
    enabled in rte_eth_dev_configure() at port level. An offloading enabled
    at port level can't be disabled at queue level.
Which I suppose confirms that if testpmd is combining per-port and
per-queue offloads, it's just for the purposes of testpmd.

Apologies for worrying at this even more, I just wanted to be sure that
I understand what the PMD is expected to do.

Regards,

Lance