* [PATCH 19.11] net/bonding: fix flow flush order on close
From: Ivan Malov @ 2022-11-14 11:13 UTC
To: stable; +Cc: Christian Ehrhardt, Andrew Rybchenko
[ upstream commit df810d1b6e31a3e25085a6abae3be119af3034c1 ]
The current code first removes all back-end devices of
the bonded device and only then invokes the flow flush
operation, which is meant to remove flows in those
back-end devices; by that point there is nothing left
to flush. Fix that by re-ordering the steps accordingly.
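
To illustrate why the old order cannot work, below is a minimal,
self-contained C sketch. The model struct and the helpers
flush_flows()/remove_backends() are illustrative stand-ins for the
driver's internals and bond_flow_ops.flush(), not its actual symbols.

#include <stdio.h>

/* Toy model of the bonded device's private data. */
struct bond_model {
	unsigned int backend_count;	/* attached back-end ports */
	unsigned int flow_count;	/* flow rules mirrored to each back-end */
};

/* The flush can only reach back-ends that are still attached. */
static void flush_flows(struct bond_model *bm)
{
	for (unsigned int p = 0; p < bm->backend_count; p++)
		printf("destroying %u flow(s) on back-end port %u\n",
		       bm->flow_count, p);
	bm->flow_count = 0;
}

static void remove_backends(struct bond_model *bm)
{
	bm->backend_count = 0;
}

int main(void)
{
	struct bond_model bm = { .backend_count = 2, .flow_count = 3 };

	/*
	 * Old order: remove_backends(&bm); flush_flows(&bm);
	 * prints nothing, so the rules programmed into the back-end
	 * ports would never be destroyed.
	 */
	flush_flows(&bm);	/* fixed order: flush first ...          */
	remove_backends(&bm);	/* ... then detach the back-end devices  */
	return 0;
}

Compiled as C99 or later, the fixed order prints one line per
back-end, while the old order prints nothing because backend_count
is already zero by the time the flush runs.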
Fixes: 49dad9028e2a ("net/bonding: support flow API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 4a0f6e1b8..7e79bac42 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -2115,6 +2115,10 @@ bond_ethdev_close(struct rte_eth_dev *dev)
 	struct rte_flow_error ferror;
 
 	RTE_BOND_LOG(INFO, "Closing bonded device %s", dev->device->name);
+
+	/* Flush flows in all back-end devices before removing them */
+	bond_flow_ops.flush(dev, &ferror);
+
 	while (internals->slave_count != skipped) {
 		uint16_t port_id = internals->slaves[skipped].port_id;
 
@@ -2127,7 +2131,6 @@ bond_ethdev_close(struct rte_eth_dev *dev)
 			skipped++;
 		}
 	}
-	bond_flow_ops.flush(dev, &ferror);
 	bond_ethdev_free_queues(dev);
 	rte_bitmap_reset(internals->vlan_filter_bmp);
 }
--
2.30.2
* Re: [PATCH 19.11] net/bonding: fix flow flush order on close
From: Christian Ehrhardt @ 2022-11-15 8:19 UTC
To: Ivan Malov; +Cc: stable, Andrew Rybchenko
On Mon, Nov 14, 2022 at 12:13 PM Ivan Malov <ivan.malov@oktetlabs.ru> wrote:
>
> [ upstream commit df810d1b6e31a3e25085a6abae3be119af3034c1 ]
Thanks, applied to the WIP branch - expect it to be part of 19.11.14
unless some builds stumble over it.
> The current code first removes all back-end devices of
> the bonded device and only then invokes the flow flush
> operation, which is meant to remove flows in those
> back-end devices; by that point there is nothing left
> to flush. Fix that by re-ordering the steps accordingly.
>
> Fixes: 49dad9028e2a ("net/bonding: support flow API")
>
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> drivers/net/bonding/rte_eth_bond_pmd.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 4a0f6e1b8..7e79bac42 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -2115,6 +2115,10 @@ bond_ethdev_close(struct rte_eth_dev *dev)
>  	struct rte_flow_error ferror;
>  
>  	RTE_BOND_LOG(INFO, "Closing bonded device %s", dev->device->name);
> +
> +	/* Flush flows in all back-end devices before removing them */
> +	bond_flow_ops.flush(dev, &ferror);
> +
>  	while (internals->slave_count != skipped) {
>  		uint16_t port_id = internals->slaves[skipped].port_id;
>  
> @@ -2127,7 +2131,6 @@ bond_ethdev_close(struct rte_eth_dev *dev)
>  			skipped++;
>  		}
>  	}
> -	bond_flow_ops.flush(dev, &ferror);
>  	bond_ethdev_free_queues(dev);
>  	rte_bitmap_reset(internals->vlan_filter_bmp);
>  }
> --
> 2.30.2
>
--
Christian Ehrhardt
Senior Staff Engineer, Ubuntu Server
Canonical Ltd