Date: Tue, 13 Jul 2021 10:18:19 +0200
From: Havlík Martin <xhavli56@stud.fit.vutbr.cz>
To: "Min Hu (Connor)"
Cc: dev@dpdk.org, Matan Azrad, Chas3, Shahaf Shuler, Slava Ovsiienko, Jan Viktorin, Ori Kam
Subject: Re: [dpdk-dev] [PATCH 3/3] net/bonding: start ethdev prior to setting 8023ad flow
Message-ID: <75196e3aee88a5347e597c0ee6fb8bf6@stud.fit.vutbr.cz>
References: <20210622092531.73112-1-xhavli56@stud.fit.vutbr.cz> <20210622092531.73112-4-xhavli56@stud.fit.vutbr.cz> <5f2cd5e6-1c46-227c-7ab1-5a2a34eb337d@huawei.com> <01bd8d8fcd364584fecdea3bc70abd1c@stud.fit.vutbr.cz> <20210707175420.4d81fdaf@coaster.localdomain> <20210711234537.6d2b14c4@tanguero.localdomain>
List-Id: DPDK patches and discussions

On 2021-07-12 15:07, Ori Kam wrote:
> Hi Jan,
>
>> -----Original Message-----
>> From: Jan Viktorin
>> Sent: Monday, July 12, 2021 12:46 AM
>>
>> On Sun, 11 Jul 2021 08:49:18 +0000
>> Ori Kam wrote:
>>
>> > Hi Jan,
>>
>> Hi Ori,
>>
>> >
>> > > -----Original Message-----
>> > > From: dev On Behalf Of Jan Viktorin
>> > > Sent: Wednesday, July 7, 2021 6:54 PM
>> > >
>> > > On Sun, 4 Jul 2021 15:18:01 +0000
>> > > Matan Azrad wrote:
>> > >
>> > > > From: Havlík Martin
>> > > > > On 2021-06-23 09:04, Min Hu (Connor) wrote:
>> > > > > > On 2021/6/22 17:25, Martin Havlik wrote:
>> > > > > >> When dedicated queues are enabled, mlx5 PMD fails to install
>> > > > > >> RTE Flows if the underlying ethdev is not started:
>> > > > > >> bond_ethdev_8023ad_flow_set(267) - bond_ethdev_8023ad_flow_set: port
>> > > > > >> not started (slave_port=0 queue_id=1)
>> > > > > >>
>> > > > > > Why does mlx5 PMD flow creation rely on the port being started?
>> > > > > > I noticed other PMDs do not have that reliance.
>> > > > > >
>> > > > > After looking into it, I really can't answer this mlx5-centered
>> > > > > question. The closest related info we found so far is the 5th point
>> > > > > in https://doc.dpdk.org/guides/prog_guide/rte_flow.html#caveats,
>> > > > > but it only specifies that it's the application's responsibility and
>> > > > > that flow rules are assumed invalid after port stop/close/restart.
>> > > > > It doesn't say anything about <port stop - flow rule create - port
>> > > > > start> vs <port stop - port start - flow rule create>, where the
>> > > > > former is the point of failure on mlx5. I'm addressing the
>> > > > > maintainers for mlx5, who might know a bit more on the topic.
>> > > >
>> > >
>> > > Hello Matan,
>> > >
>> > > > From rte_ethdev.h
>> > > >
>> > > > * Please note that some configuration is not stored between calls to
>> > > > * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will
>> > > > * be retained:
>> > > > *
>> > > > * - MTU
>> > > > * - flow control settings
>> > > > * - receive mode configuration (promiscuous mode, all-multicast mode,
>> > > > *   hardware checksum mode, RSS/VMDQ settings etc.)
>> > > > * - VLAN filtering configuration
>> > > > * - default MAC address
>> > > > * - MAC addresses supplied to MAC address array
>> > > > * - flow director filtering mode (but not filtering rules)
>> > > > * - NIC queue statistics mappings
>> > >
>> > > Just after this section, you can find the following statement:
>> > >
>> > > * Any other configuration will not be stored and will need to be re-entered
>> > > * before a call to rte_eth_dev_start().
>> > >
>> > > It is not very clear how exactly this relates to flows (and this
>> > > applies to the whole quoted section, I think), but at least it can be
>> > > used as a counter-argument.
>> > >
>> > I agree the doc is not clear; as I see it, flows are not part of the
>> > configuration, at least not when we are talking about rte_flow.
>>
>> Agree.
>>
>> >
>> > > >
>> > > > Mlx5 assumes flows are allowed to be configured only after
>> > > > rte_eth_dev_start(). Before start / after stop - no flow is valid
>> > > > anymore.
>> > >
>> > > I believe that this discussion is not about the validity of flows. Let
>> > > the flows be invalid after calling rte_eth_dev_stop(). This is OK;
>> > > flows must be recreated and the bonding driver works this way.
>> > > But why not *before start*?
>> > Think about it this way: by changing the configuration you may create
>> > invalid flows. For example, you can only change the number of queues
>> > after port stop, so if you create a flow with jump to queue 3 and then
>> > you remove queue 3, the flow that is cached is not valid anymore.
>> > This goes for any other configuration that may affect the validity of a flow.
>>
>> This is a valid argument, of course. The thing is whether it is the
>> responsibility of the PMD to take care of those corner cases or if this
>> is up to the application developer. If we respect the fact that calling
>> stop invalidates all flows, then when you do:
>>
>> > port stop 0
>> > flow create 0 ingress pattern ... / end actions queue 6 / end
>> > port config rxq 3
>> > port start 0
>>
>> it's clear that something is really wrong with the caller/user. I would
>> say that this is an application bug. I would expect that you first
>> reconfigure the port and after that you modify flows. It seems quite
>> logical and intuitive, doesn't it?
>>
> I agree the use case I described is bad programming, but the same logic
> can be applied to stopping the port: why should flows be deleted?
> And, even more important, on some HW some flows can only be inserted after
> start, since there is no real traffic gate; basically, the start means
> adding the flows. If a flow is in the HW it is working; adding a gate
> would result in PPS degradation.
>
>> Anyway, the PMD can possibly catch this rxq inconsistency, but I can
>> imagine that there are more complicated situations than just changing
>> the number of queues. Any idea for a more complex real case?
>>
> Some of the resources may be created only after start,
> but I don't have a very good example now.
>
>> Martin Havlik, can you please check how ixgbe and i40e behave in the
>> case above with "rxq change after flow created"?
>>
Both PMDs do not raise any errors in the mentioned case.
The packets that should go to the non-existent queue are dropped at some
point in the processing (from what I could see and understand).

I'm glad for the discussion that is taking place, but I feel we have
strayed from its original reason: the inability to enable dedicated
queues for bonding mode 8023AD on mlx5. We have concluded that flow
creation has to be done after port/device start. So moving the function
call to create the flow after device start should solve the issue
(https://mails.dpdk.org/archives/dev/2021-June/212210.html). From my
testing, flow create after start is not a problem on Intel PMDs.

I'm turning to you, Connor, as you asked the original question on this.
Is the patch I presented applicable?

Martin

>> >
>> > > Does somebody know how other drivers behave in this situation? (We
>> > > know and can check for Intel; there it does not seem to be an
>> > > issue.)
>> > >
>> > > By the way, the mlx5 behaviour opens a (probably short) time window
>> > > between starting of a port and configuration of filtering flows. You
>> > > may want to start the port with thousands of flows that apply just
>> > > when the port starts (not after, that's late). This may introduce
>> > > glitches in filtering and measuring of traffic (well, it is a
>> > > question how serious an issue it could be...).
>> > >
>> > Agree, but this is always true: nothing is done in zero time, and even
>> > if it was, the insertion is not in zero time (assuming that even if
>> > the flows are stored by the PMD until start, they still will not all
>> > be inserted at the same time).
>>
>> Sorry, I probably do not understand well. Yes, creation of a flow would
>> never be zero time, agree. But if I create flows before the port starts,
>> then I do not have to care too much about how long the creation takes,
>> because the card is expected to be configured already before pushing the
>> "start button".
>>
>> Maybe you assume that the created flows would first be cached inside
>> the PMD and, when the port is finally started, it then transparently
>> writes them to the HW. But this is not what I was talking about; that's
>> maybe even worse, because you are hiding such behaviour from users...
>>
>> (I do not know the exact mlx5 implementation details, so my answer may
>> miss something, sorry for that.)
>>
> You are correct, this is a HW/PMD implementation issue; in the case of
> Nvidia we must cache the flows and only insert them after start.
> Since rte_flow is generic and HW is not generic, sometimes the PMD needs
> to translate this difference, and in this gray area there is a balancing
> act between the best API, best performance, and understanding what the
> user really wants.
> This is why the doc should be very clear on what is expected,
> but this is also the reason why such a document is very hard to agree on.
>
> Ori
>> >
>> > > This matters for the bonding case as well, doesn't it? It is not
>> > > desirable to accidentally omit a packet that was received by primary
>> > > ingress logic instead of being redirected into the dedicated queue.
>> > >
>> > > Are there any chances that for mlx5 it would be possible to insert
>> > > flow rules before calling rte_eth_dev_start? Anyway, the behaviour
>> > > should be specified and documented in DPDK more precisely to avoid
>> > > such uncertainty in the future.
>> > >
>> > I agree the documentation should be fixed.
>>
>> +1
>>
>> Jan
>>
>> >
>> > Ori
>> > > Jan
>> > >
>> > > >
>> > > > Matan
>> > > >
>> > > > > >> Signed-off-by: Martin Havlik
>> > > > > >> Cc: Jan Viktorin
>> > > > > >> ---
>> > > > > >>  drivers/net/bonding/rte_eth_bond_pmd.c | 26 ++++++++++++++++++--------
>> > > > > >>  1 file changed, 18 insertions(+), 8 deletions(-)
>> > > > > >>
>> > > > > >> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
>> > > > > >> index a6755661c..fea3bc537 100644
>> > > > > >> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
>> > > > > >> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
>> > > > > >> @@ -1818,25 +1818,35 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
>> > > > > >>  			rte_flow_destroy(slave_eth_dev->data->port_id,
>> > > > > >>  					internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
>> > > > > >>  					&flow_error);
>> > > > > >> +	}
>> > > > > >> +	/* Start device */
>> > > > > >> +	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
>> > > > > >> +	if (errval != 0) {
>> > > > > >> +		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
>> > > > > >> +				slave_eth_dev->data->port_id, errval);
>> > > > > >> +		return -1;
>> > > > > >> +	}
>> > > > > >> +
>> > > > > >> +	if (internals->mode == BONDING_MODE_8023AD &&
>> > > > > >> +			internals->mode4.dedicated_queues.enabled == 1) {
>> > > > > >>  		errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
>> > > > > >>  				slave_eth_dev->data->port_id);
>> > > > > >>  		if (errval != 0) {
>> > > > > >>  			RTE_BOND_LOG(ERR,
>> > > > > >>  				"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
>> > > > > >>  				slave_eth_dev->data->port_id, errval);
>> > > > > >> +
>> > > > > >> +			errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
>> > > > > >> +			if (errval < 0) {
>> > > > > >> +				RTE_BOND_LOG(ERR,
>> > > > > >> +					"rte_eth_dev_stop: port=%d, err (%d)",
>> > > > > >> +					slave_eth_dev->data->port_id, errval);
>> > > > > >> +			}
>> > > > > >>  			return errval;
>> > > > > >>  		}
>> > > > > >>  	}
>> > > > > >> -	/* Start device */
>> > > > > >> -	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
>> > > > > >> -	if (errval != 0) {
>> > > > > >> -		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
>> > > > > >> -				slave_eth_dev->data->port_id, errval);
>> > > > > >> -		return -1;
>> > > > > >> -	}
>> > > > > >> -
>> > > > > >>  	/* If RSS is enabled for bonding, synchronize RETA */
>> > > > > >>  	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
>> > > > > >>  		int i;
>> > > > > >>
>>
>
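[Editor's note] To make the reordering in the quoted patch concrete, here is a minimal compilable sketch of the control flow it establishes. The functions `dev_start`, `dev_stop`, and `flow_set` are hypothetical stubs standing in for `rte_eth_dev_start`, `rte_eth_dev_stop`, and `bond_ethdev_8023ad_flow_set`; the real DPDK calls and their parameters are not reproduced here.

```c
#include <assert.h>

/* Hypothetical stubs for the DPDK calls; each returns 0 on success,
 * like the real functions they stand in for. */
static int start_ok, flow_ok;          /* control stub outcomes */
static int dev_started, flow_created;  /* observable state */

static int dev_start(void) { if (!start_ok) return -1; dev_started = 1; return 0; }
static int dev_stop(void)  { dev_started = 0; return 0; }
static int flow_set(void)  { if (!flow_ok) return -1; flow_created = 1; return 0; }

/* Mirrors the ordering the patch establishes in slave_configure():
 * start the slave ethdev first, then install the 802.3ad
 * dedicated-queue flow; if the flow fails, stop the device again so
 * no half-configured port is left running. */
int slave_configure_order(int dedicated_queues_enabled)
{
	if (dev_start() != 0)
		return -1;
	if (dedicated_queues_enabled && flow_set() != 0) {
		dev_stop();
		return -1;
	}
	return 0;
}
```

With this ordering, the mlx5 requirement (flows only after start) is satisfied, and the error path leaves the slave port stopped rather than started without its dedicated-queue flow.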