* patch 'doc: fix configuration in baseband 5GNR driver guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'event/dlb2: remove superfluous memcpy' " Xueming Li
` (122 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Hernan Vargas; +Cc: Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4e8d39a298ee9a20d8602b4dc57b415ac907b681
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4e8d39a298ee9a20d8602b4dc57b415ac907b681 Mon Sep 17 00:00:00 2001
From: Hernan Vargas <hernan.vargas@intel.com>
Date: Thu, 8 Feb 2024 08:50:33 -0800
Subject: [PATCH] doc: fix configuration in baseband 5GNR driver guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a46c1225141d2c099c70d98b38f1f9c20307ff6f ]
flr_timeout was removed from the code a while ago; update the doc accordingly.
Also fix a minor typo in the 5GNR example.
Fixes: 2d4306438c92 ("baseband/fpga_5gnr_fec: add configure function")
Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/bbdevs/fpga_5gnr_fec.rst | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/doc/guides/bbdevs/fpga_5gnr_fec.rst b/doc/guides/bbdevs/fpga_5gnr_fec.rst
index 956dd6bed5..99fc936829 100644
--- a/doc/guides/bbdevs/fpga_5gnr_fec.rst
+++ b/doc/guides/bbdevs/fpga_5gnr_fec.rst
@@ -100,7 +100,6 @@ parameters defined in ``rte_fpga_5gnr_fec_conf`` structure:
uint8_t dl_bandwidth;
uint8_t ul_load_balance;
uint8_t dl_load_balance;
- uint16_t flr_time_out;
};
- ``pf_mode_en``: identifies whether only PF is to be used, or the VFs. PF and
@@ -126,10 +125,6 @@ parameters defined in ``rte_fpga_5gnr_fec_conf`` structure:
If all hardware queues exceeds the watermark, no code blocks will be
streamed in from UL/DL code block FIFO.
-- ``flr_time_out``: specifies how many 16.384us to be FLR time out. The
- time_out = flr_time_out x 16.384us. For instance, if you want to set 10ms for
- the FLR time out then set this setting to 0x262=610.
-
An example configuration code calling the function ``rte_fpga_5gnr_fec_configure()`` is shown
below:
@@ -154,7 +149,7 @@ below:
/* setup FPGA PF */
ret = rte_fpga_5gnr_fec_configure(info->dev_name, &conf);
TEST_ASSERT_SUCCESS(ret,
- "Failed to configure 4G FPGA PF for bbdev %s",
+ "Failed to configure 5GNR FPGA PF for bbdev %s",
info->dev_name);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.258058792 +0800
+++ 0002-doc-fix-configuration-in-baseband-5GNR-driver-guide.patch 2024-04-13 20:43:04.897754062 +0800
@@ -1 +1 @@
-From a46c1225141d2c099c70d98b38f1f9c20307ff6f Mon Sep 17 00:00:00 2001
+From 4e8d39a298ee9a20d8602b4dc57b415ac907b681 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a46c1225141d2c099c70d98b38f1f9c20307ff6f ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
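As an aside, the tick arithmetic in the removed flr_time_out paragraph is easy to sanity-check in isolation. The sketch below is illustrative only; the helper is not a DPDK API:

```c
#include <assert.h>

/* The removed doc paragraph expressed the FLR timeout in 16.384 us
 * hardware ticks: time_out = flr_time_out * 16.384 us. A 10 ms
 * timeout therefore mapped to 10000 / 16.384 ~= 610 == 0x262 ticks.
 * flr_ticks_for_us() is an illustrative helper, not part of the
 * fpga_5gnr_fec driver. */
static unsigned int flr_ticks_for_us(double timeout_us)
{
	return (unsigned int)(timeout_us / 16.384);
}
```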
* patch 'event/dlb2: remove superfluous memcpy' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
2024-04-13 12:48 ` patch 'doc: fix configuration in baseband 5GNR driver guide' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'test/event: fix crash in Tx adapter freeing' " Xueming Li
` (121 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Morten Brørup; +Cc: Stephen Hemminger, Abdullah Sevincer, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=524c60f42279c9528ba1a61ca4b517a4095726c4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 524c60f42279c9528ba1a61ca4b517a4095726c4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>
Date: Mon, 16 Jan 2023 14:07:22 +0100
Subject: [PATCH] event/dlb2: remove superfluous memcpy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c1b086d2abdb7773700b4d216f323bd9278ace7a ]
Copying with the same src and dst address has no effect; the copy is
removed to avoid a compiler warning with a decorated rte_memcpy.
Fixes: e7c9971a857a ("event/dlb2: add probe-time hardware init")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
---
drivers/event/dlb2/dlb2.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 050ace0904..5044cb17ef 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -160,7 +160,6 @@ static int
dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
{
struct dlb2_hw_dev *handle = &dlb2->qm_instance;
- struct dlb2_hw_resource_info *dlb2_info = &handle->info;
int num_ldb_ports;
int ret;
@@ -222,8 +221,6 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
handle->info.hw_rsrc_max.reorder_window_size =
dlb2->hw_rsrc_query_results.num_hist_list_entries;
- rte_memcpy(dlb2_info, &handle->info.hw_rsrc_max, sizeof(*dlb2_info));
-
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.290679049 +0800
+++ 0003-event-dlb2-remove-superfluous-memcpy.patch 2024-04-13 20:43:04.897754062 +0800
@@ -1 +1 @@
-From c1b086d2abdb7773700b4d216f323bd9278ace7a Mon Sep 17 00:00:00 2001
+From 524c60f42279c9528ba1a61ca4b517a4095726c4 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c1b086d2abdb7773700b4d216f323bd9278ace7a ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 271bbce54a..628ddef649 100644
+index 050ace0904..5044cb17ef 100644
@@ -26 +28 @@
-@@ -163,7 +163,6 @@ static int
+@@ -160,7 +160,6 @@ static int
@@ -34 +36 @@
-@@ -225,8 +224,6 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
+@@ -222,8 +221,6 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
^ permalink raw reply [flat|nested] 263+ messages in thread
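For context, the removed pattern looks like this in miniature; a hedged sketch (struct names are illustrative, not the dlb2 driver's), showing why the copy is dead code:

```c
#include <assert.h>

/* Illustrative miniature of the dlb2 case: 'dlb2_info' was merely a
 * pointer to handle->info, and the function filled handle->info
 * directly, so the final rte_memcpy copied the struct onto itself.
 * A decorated memcpy (restrict-qualified parameters) may warn on
 * such aliasing src/dst. */
struct rsrc_info { unsigned int num_ldb_ports; unsigned int num_hist_list; };
struct hw_dev { struct rsrc_info info; };

static struct hw_dev g_dev;
static struct rsrc_info g_scratch;

/* Returns nonzero when src and dst alias, i.e. the copy has no effect
 * and can simply be dropped. */
static int copy_is_superfluous(const void *dst, const void *src)
{
	return dst == src;
}
```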
* patch 'test/event: fix crash in Tx adapter freeing' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
2024-04-13 12:48 ` patch 'doc: fix configuration in baseband 5GNR driver guide' " Xueming Li
2024-04-13 12:48 ` patch 'event/dlb2: remove superfluous memcpy' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'eventdev: improve Doxygen comments on configure struct' " Xueming Li
` (120 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Ganapati Kundapura; +Cc: Pavan Nikhilesh, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7721c9f49898ed88162fdf4efb22986cad8018fb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7721c9f49898ed88162fdf4efb22986cad8018fb Mon Sep 17 00:00:00 2001
From: Ganapati Kundapura <ganapati.kundapura@intel.com>
Date: Mon, 26 Feb 2024 02:30:03 -0600
Subject: [PATCH] test/event: fix crash in Tx adapter freeing
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1f85467fcaf03c6b0d879614ee18f9a98fe9e9e6 ]
Uninitialized mbufs are enqueued to the eventdev, which causes a segfault
when freeing the mbuf in the Tx adapter.
Fixed by initializing the mbufs before enqueuing them to the eventdev.
Fixes: 46cf97e4bbfa ("eventdev: add test for eth Tx adapter")
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_event_eth_tx_adapter.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/app/test/test_event_eth_tx_adapter.c b/app/test/test_event_eth_tx_adapter.c
index dbd22f6800..482b8e69e3 100644
--- a/app/test/test_event_eth_tx_adapter.c
+++ b/app/test/test_event_eth_tx_adapter.c
@@ -484,6 +484,10 @@ tx_adapter_service(void)
int internal_port;
uint32_t cap;
+ /* Initialize mbufs */
+ for (i = 0; i < RING_SIZE; i++)
+ rte_pktmbuf_reset(&bufs[i]);
+
memset(&dev_conf, 0, sizeof(dev_conf));
err = rte_event_eth_tx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
&cap);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.327714801 +0800
+++ 0004-test-event-fix-crash-in-Tx-adapter-freeing.patch 2024-04-13 20:43:04.897754062 +0800
@@ -1 +1 @@
-From 1f85467fcaf03c6b0d879614ee18f9a98fe9e9e6 Mon Sep 17 00:00:00 2001
+From 7721c9f49898ed88162fdf4efb22986cad8018fb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1f85467fcaf03c6b0d879614ee18f9a98fe9e9e6 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
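The failure mode is easy to reproduce in miniature. Below is a hedged sketch: mini_mbuf is an illustrative stand-in for rte_mbuf, and mini_mbuf_reset for rte_pktmbuf_reset, not the real DPDK types:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the handful of rte_mbuf fields whose garbage values
 * crash a free path: a wild 'next' pointer makes segment-chain
 * walking dereference random memory. */
struct mini_mbuf {
	struct mini_mbuf *next;  /* next segment, NULL if single-segment */
	unsigned short refcnt;   /* reference count, 1 for a fresh mbuf */
	unsigned short nb_segs;  /* number of segments in the chain */
};

static struct mini_mbuf g_bufs[4];

/* Illustrative analogue of rte_pktmbuf_reset(): put the header into
 * the known-good state a free routine relies on. */
static void mini_mbuf_reset(struct mini_mbuf *m)
{
	m->next = NULL;
	m->refcnt = 1;
	m->nb_segs = 1;
}

/* What a free routine implicitly assumes before walking the chain. */
static int mini_mbuf_safe_to_free(const struct mini_mbuf *m)
{
	return m->next == NULL && m->refcnt == 1 && m->nb_segs == 1;
}
```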
* patch 'eventdev: improve Doxygen comments on configure struct' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (2 preceding siblings ...)
2024-04-13 12:48 ` patch 'test/event: fix crash in Tx adapter freeing' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'eventdev: fix Doxygen processing of vector " Xueming Li
` (119 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Pavan Nikhilesh, Jerin Jacob, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2faf71417f2c08153a844df70beae6ff68a20e5b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2faf71417f2c08153a844df70beae6ff68a20e5b Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Wed, 21 Feb 2024 10:32:15 +0000
Subject: [PATCH] eventdev: improve Doxygen comments on configure struct
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1203462c5ada21bdace88e009db5a8f17f88528a ]
General rewording and cleanup on the rte_event_dev_config structure.
Improved the wording of some sentences and created linked
cross-references out of the existing references to the dev_info
structure.
As part of the rework, fix an issue with how single-link port-queue pairs
were counted in the rte_event_dev_config structure. This did not match
the actual implementation and, if the documentation were followed, certain
valid port/queue configurations would have been impossible to configure.
Fix this by changing the documentation to match the implementation.
Bugzilla ID: 1368
Fixes: 75d113136f38 ("eventdev: express DLB/DLB2 PMD constraints")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
lib/eventdev/rte_eventdev.h | 61 ++++++++++++++++++++++---------------
1 file changed, 37 insertions(+), 24 deletions(-)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a3e2f9f862 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -515,9 +515,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
struct rte_event_dev_config {
uint32_t dequeue_timeout_ns;
/**< rte_event_dequeue_burst() timeout on this device.
- * This value should be in the range of *min_dequeue_timeout_ns* and
- * *max_dequeue_timeout_ns* which previously provided in
- * rte_event_dev_info_get()
+ * This value should be in the range of @ref rte_event_dev_info.min_dequeue_timeout_ns and
+ * @ref rte_event_dev_info.max_dequeue_timeout_ns returned by
+ * @ref rte_event_dev_info_get()
* The value 0 is allowed, in which case, default dequeue timeout used.
* @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
*/
@@ -525,40 +525,53 @@ struct rte_event_dev_config {
/**< In a *closed system* this field is the limit on maximum number of
* events that can be inflight in the eventdev at a given time. The
* limit is required to ensure that the finite space in a closed system
- * is not overwhelmed. The value cannot exceed the *max_num_events*
- * as provided by rte_event_dev_info_get().
- * This value should be set to -1 for *open system*.
+ * is not exhausted.
+ * The value cannot exceed @ref rte_event_dev_info.max_num_events
+ * returned by rte_event_dev_info_get().
+ *
+ * This value should be set to -1 for *open systems*, that is,
+ * those systems returning -1 in @ref rte_event_dev_info.max_num_events.
+ *
+ * @see rte_event_port_conf.new_event_threshold
*/
uint8_t nb_event_queues;
/**< Number of event queues to configure on this device.
- * This value cannot exceed the *max_event_queues* which previously
- * provided in rte_event_dev_info_get()
+ * This value *includes* any single-link queue-port pairs to be used.
+ * This value cannot exceed @ref rte_event_dev_info.max_event_queues +
+ * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+ * returned by rte_event_dev_info_get().
+ * The number of non-single-link queues i.e. this value less
+ * *nb_single_link_event_port_queues* in this struct, cannot exceed
+ * @ref rte_event_dev_info.max_event_queues
*/
uint8_t nb_event_ports;
/**< Number of event ports to configure on this device.
- * This value cannot exceed the *max_event_ports* which previously
- * provided in rte_event_dev_info_get()
+ * This value *includes* any single-link queue-port pairs to be used.
+ * This value cannot exceed @ref rte_event_dev_info.max_event_ports +
+ * @ref rte_event_dev_info.max_single_link_event_port_queue_pairs
+ * returned by rte_event_dev_info_get().
+ * The number of non-single-link ports i.e. this value less
+ * *nb_single_link_event_port_queues* in this struct, cannot exceed
+ * @ref rte_event_dev_info.max_event_ports
*/
uint32_t nb_event_queue_flows;
- /**< Number of flows for any event queue on this device.
- * This value cannot exceed the *max_event_queue_flows* which previously
- * provided in rte_event_dev_info_get()
+ /**< Max number of flows needed for a single event queue on this device.
+ * This value cannot exceed @ref rte_event_dev_info.max_event_queue_flows
+ * returned by rte_event_dev_info_get()
*/
uint32_t nb_event_port_dequeue_depth;
- /**< Maximum number of events can be dequeued at a time from an
- * event port by this device.
- * This value cannot exceed the *max_event_port_dequeue_depth*
- * which previously provided in rte_event_dev_info_get().
+ /**< Max number of events that can be dequeued at a time from an event port on this device.
+ * This value cannot exceed @ref rte_event_dev_info.max_event_port_dequeue_depth
+ * returned by rte_event_dev_info_get().
* Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
- * @see rte_event_port_setup()
+ * @see rte_event_port_setup() rte_event_dequeue_burst()
*/
uint32_t nb_event_port_enqueue_depth;
- /**< Maximum number of events can be enqueued at a time from an
- * event port by this device.
- * This value cannot exceed the *max_event_port_enqueue_depth*
- * which previously provided in rte_event_dev_info_get().
+ /**< Maximum number of events can be enqueued at a time to an event port on this device.
+ * This value cannot exceed @ref rte_event_dev_info.max_event_port_enqueue_depth
+ * returned by rte_event_dev_info_get().
* Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
- * @see rte_event_port_setup()
+ * @see rte_event_port_setup() rte_event_enqueue_burst()
*/
uint32_t event_dev_cfg;
/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
@@ -568,7 +581,7 @@ struct rte_event_dev_config {
* queues; this value cannot exceed *nb_event_ports* or
* *nb_event_queues*. If the device has ports and queues that are
* optimized for single-link usage, this field is a hint for how many
- * to allocate; otherwise, regular event ports and queues can be used.
+ * to allocate; otherwise, regular event ports and queues will be used.
*/
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.350252471 +0800
+++ 0005-eventdev-improve-Doxygen-comments-on-configure-struc.patch 2024-04-13 20:43:04.897754062 +0800
@@ -1 +1 @@
-From 1203462c5ada21bdace88e009db5a8f17f88528a Mon Sep 17 00:00:00 2001
+From 2faf71417f2c08153a844df70beae6ff68a20e5b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1203462c5ada21bdace88e009db5a8f17f88528a ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 9808889625..fb1c4429f0 100644
+index ec9b02455d..a3e2f9f862 100644
@@ -32 +34 @@
-@@ -684,9 +684,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
+@@ -515,9 +515,9 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
@@ -45 +47 @@
-@@ -694,40 +694,53 @@ struct rte_event_dev_config {
+@@ -525,40 +525,53 @@ struct rte_event_dev_config {
@@ -119 +121 @@
-@@ -737,7 +750,7 @@ struct rte_event_dev_config {
+@@ -568,7 +581,7 @@ struct rte_event_dev_config {
^ permalink raw reply [flat|nested] 263+ messages in thread
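The corrected counting rule can be expressed as a small check. A hedged sketch follows; the helper is illustrative and not part of the eventdev API:

```c
#include <assert.h>

/* The clarified rule: nb_event_queues *includes* single-link
 * queue-port pairs, so the total may exceed max_event_queues by up to
 * max_single_link_event_port_queue_pairs, while the non-single-link
 * portion alone must still fit within max_event_queues. The same
 * shape applies to ports. (Illustrative helper, not a DPDK API.) */
static int nb_queues_valid(unsigned int nb_queues,
			   unsigned int nb_single_link,
			   unsigned int max_queues,
			   unsigned int max_single_link)
{
	if (nb_single_link > nb_queues)
		return 0; /* single-link pairs are a subset of the total */
	if (nb_queues > max_queues + max_single_link)
		return 0; /* total exceeds the combined limit */
	if (nb_queues - nb_single_link > max_queues)
		return 0; /* regular queues alone exceed max_event_queues */
	return 1;
}
```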
* patch 'eventdev: fix Doxygen processing of vector struct' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (3 preceding siblings ...)
2024-04-13 12:48 ` patch 'eventdev: improve Doxygen comments on configure struct' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'eventdev/crypto: fix enqueueing' " Xueming Li
` (118 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Pavan Nikhilesh, Jerin Jacob, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e5ed46471041c6777095d01f2f4d057c1be2b3af
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e5ed46471041c6777095d01f2f4d057c1be2b3af Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Wed, 21 Feb 2024 10:32:21 +0000
Subject: [PATCH] eventdev: fix Doxygen processing of vector struct
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f5746d3fa2f9f08179878c22a0ec1f598a7f15a1 ]
The event vector struct was missing comments on two members, and also
was inadvertently creating a local variable called "__rte_aligned" in
the Doxygen output.
Correct the comment markers to fix the former issue, and fix the latter
by putting "#ifndef __DOXYGEN__" around the alignment constraint.
Fixes: 1cc44d409271 ("eventdev: introduce event vector capability")
Fixes: 3c838062b91f ("eventdev: introduce event vector Rx capability")
Fixes: 699155f2d4e2 ("eventdev: fix clang C++ include")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
lib/eventdev/rte_eventdev.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a3e2f9f862..7fd9016ca7 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1111,10 +1111,8 @@ struct rte_event_vector {
* port and queue of the mbufs in the vector
*/
struct {
- uint16_t port;
- /* Ethernet device port id. */
- uint16_t queue;
- /* Ethernet device queue id. */
+ uint16_t port; /**< Ethernet device port id. */
+ uint16_t queue; /**< Ethernet device queue id. */
};
};
/**< Union to hold common attributes of the vector array. */
@@ -1143,7 +1141,11 @@ struct rte_event_vector {
* vector array can be an array of mbufs or pointers or opaque u64
* values.
*/
+#ifndef __DOXYGEN__
} __rte_aligned(16);
+#else
+};
+#endif
/* Scheduler type definitions */
#define RTE_SCHED_TYPE_ORDERED 0
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.379822833 +0800
+++ 0006-eventdev-fix-Doxygen-processing-of-vector-struct.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From f5746d3fa2f9f08179878c22a0ec1f598a7f15a1 Mon Sep 17 00:00:00 2001
+From e5ed46471041c6777095d01f2f4d057c1be2b3af Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f5746d3fa2f9f08179878c22a0ec1f598a7f15a1 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 913fe38974..3af46864df 100644
+index a3e2f9f862..7fd9016ca7 100644
@@ -29 +31 @@
-@@ -1358,10 +1358,8 @@ struct rte_event_vector {
+@@ -1111,10 +1111,8 @@ struct rte_event_vector {
@@ -42 +44 @@
-@@ -1390,7 +1388,11 @@ struct rte_event_vector {
+@@ -1143,7 +1141,11 @@ struct rte_event_vector {
^ permalink raw reply [flat|nested] 263+ messages in thread
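The guard technique generalizes beyond DPDK. A minimal sketch, using a plain GCC/Clang attribute in place of the __rte_aligned macro (the macro name here is illustrative):

```c
#include <assert.h>
#include <stdalign.h>

/* Doxygen parses 'struct s { ... } __rte_aligned(16);' as a variable
 * declaration named '__rte_aligned', so the patch hides the alignment
 * from Doxygen while keeping it for real compilers. Sketched here with
 * a plain GCC/Clang attribute instead of DPDK's __rte_aligned. */
#ifndef __DOXYGEN__
#define SKETCH_ALIGNED16 __attribute__((aligned(16)))
#else
#define SKETCH_ALIGNED16 /* invisible to Doxygen */
#endif

struct vector_sketch {
	unsigned short nb_elem;
	unsigned short port;  /**< Ethernet device port id. */
	unsigned short queue; /**< Ethernet device queue id. */
} SKETCH_ALIGNED16;
```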
* patch 'eventdev/crypto: fix enqueueing' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (4 preceding siblings ...)
2024-04-13 12:48 ` patch 'eventdev: fix Doxygen processing of vector " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/crypto-perf: fix copy segment size' " Xueming Li
` (117 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Ganapati Kundapura; +Cc: Abhinandan Gujjar, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b2cd908926f9d8d9046e850dc0ca2a6494ef724b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b2cd908926f9d8d9046e850dc0ca2a6494ef724b Mon Sep 17 00:00:00 2001
From: Ganapati Kundapura <ganapati.kundapura@intel.com>
Date: Wed, 28 Feb 2024 04:39:19 -0600
Subject: [PATCH] eventdev/crypto: fix enqueueing
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f5d48ed52da03d5d3b68889e844bab59b2ffb4f0 ]
When the tail pointer of the circular buffer rolls over as the buffer
becomes full, the crypto adapter enqueues ops beyond the size of the
circular buffer, leading to a segfault due to invalid ops access.
Fixed by enqueueing (size - head) ops starting from the head pointer
when the circular buffer becomes full; the remaining ops will be flushed
in the next iteration.
Fixes: 6c3c888656fc ("eventdev/crypto: fix circular buffer full case")
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
---
lib/eventdev/rte_event_crypto_adapter.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index d46595d190..9903f96695 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -245,20 +245,28 @@ eca_circular_buffer_flush_to_cdev(struct crypto_ops_circular_buffer *bufp,
struct rte_crypto_op **ops = bufp->op_buffer;
if (*tailp > *headp)
+ /* Flush ops from head pointer to (tail - head) OPs */
n = *tailp - *headp;
else if (*tailp < *headp)
+ /* Circ buffer - Rollover.
+ * Flush OPs from head to max size of buffer.
+ * Rest of the OPs will be flushed in next iteration.
+ */
n = bufp->size - *headp;
else { /* head == tail case */
/* when head == tail,
* circ buff is either full(tail pointer roll over) or empty
*/
if (bufp->count != 0) {
- /* circ buffer is full */
- n = bufp->count;
+ /* Circ buffer - FULL.
+ * Flush OPs from head to max size of buffer.
+ * Rest of the OPS will be flushed in next iteration.
+ */
+ n = bufp->size - *headp;
} else {
- /* circ buffer is empty */
+ /* Circ buffer - Empty */
*nb_ops_flushed = 0;
- return 0; /* buffer empty */
+ return 0;
}
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.411943791 +0800
+++ 0007-eventdev-crypto-fix-enqueueing.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From f5d48ed52da03d5d3b68889e844bab59b2ffb4f0 Mon Sep 17 00:00:00 2001
+From b2cd908926f9d8d9046e850dc0ca2a6494ef724b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f5d48ed52da03d5d3b68889e844bab59b2ffb4f0 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
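The corrected head/tail arithmetic is worth seeing in isolation. A hedged sketch; flush_count is an illustrative helper, not the adapter's actual function:

```c
#include <assert.h>

/* How many ops can be flushed contiguously from a circular buffer of
 * 'size' slots, where 'count' (current occupancy) disambiguates the
 * head == tail case. The bug was returning 'count' when full, which
 * runs past the end of the array once head > 0; the fix flushes only
 * up to the end of the buffer and leaves the rest for the next pass. */
static unsigned int flush_count(unsigned int head, unsigned int tail,
				unsigned int count, unsigned int size)
{
	if (tail > head)
		return tail - head;  /* contiguous: head..tail */
	if (tail < head)
		return size - head;  /* wrapped: head..end of buffer */
	if (count != 0)
		return size - head;  /* full (tail rolled over): head..end */
	return 0;                    /* empty */
}
```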
* patch 'app/crypto-perf: fix copy segment size' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (5 preceding siblings ...)
2024-04-13 12:48 ` patch 'eventdev/crypto: fix enqueueing' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/crypto-perf: fix out-of-place mbuf " Xueming Li
` (116 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Suanming Mou; +Cc: Akhil Goyal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f0cfffc636bac0f4bf8c064a36de2cac27d4ca03
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f0cfffc636bac0f4bf8c064a36de2cac27d4ca03 Mon Sep 17 00:00:00 2001
From: Suanming Mou <suanmingm@nvidia.com>
Date: Wed, 3 Jan 2024 12:00:23 +0800
Subject: [PATCH] app/crypto-perf: fix copy segment size
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7c31d17f68f615c31811afa1830fa0e38956ffc4 ]
When the crypto device requires headroom and tailroom, the segment_sz
in options also contains the headroom_sz and tailroom_sz, but the
mbuf's data_len is the user's segment_sz without headroom_sz and
tailroom_sz. That means the data size to be copied should use the
user's segment_sz instead of options->segment_sz.
This commit fixes the copy segment size calculation.
Fixes: 14864c4217ce ("test/crypto-perf: populate mbuf in latency test")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
app/test-crypto-perf/cperf_test_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index b3bf9f67e8..dbb08588ee 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -268,7 +268,7 @@ cperf_mbuf_set(struct rte_mbuf *mbuf,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
- uint32_t segment_sz = options->segment_sz;
+ uint32_t segment_sz = options->segment_sz - options->headroom_sz - options->tailroom_sz;
uint8_t *mbuf_data;
uint8_t *test_data;
uint32_t remaining_bytes = options->max_buffer_size;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.435337760 +0800
+++ 0008-app-crypto-perf-fix-copy-segment-size.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 7c31d17f68f615c31811afa1830fa0e38956ffc4 Mon Sep 17 00:00:00 2001
+From f0cfffc636bac0f4bf8c064a36de2cac27d4ca03 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7c31d17f68f615c31811afa1830fa0e38956ffc4 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
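The corrected size relation is a one-liner. A hedged sketch; the parameter names mirror the cperf options fields, but the helper itself is illustrative:

```c
#include <assert.h>

/* options->segment_sz covers headroom + payload + tailroom inside each
 * mbuf segment, while data_len (the space actually available to copy
 * into) is only the payload portion. The fix copies at most this many
 * bytes per segment. (Illustrative helper, not a cperf function.) */
static unsigned int user_segment_sz(unsigned int segment_sz,
				    unsigned int headroom_sz,
				    unsigned int tailroom_sz)
{
	return segment_sz - headroom_sz - tailroom_sz;
}
```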
* patch 'app/crypto-perf: fix out-of-place mbuf size' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (6 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/crypto-perf: fix copy segment size' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/crypto-perf: add missing op resubmission' " Xueming Li
` (115 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Suanming Mou; +Cc: Akhil Goyal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a1f1843146e243594b105d0732aeae39196eb1c6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a1f1843146e243594b105d0732aeae39196eb1c6 Mon Sep 17 00:00:00 2001
From: Suanming Mou <suanmingm@nvidia.com>
Date: Wed, 3 Jan 2024 12:00:24 +0800
Subject: [PATCH] app/crypto-perf: fix out-of-place mbuf size
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 24515c93197091437e32f35bba3f467c01633c1d ]
If the crypto device requires headroom and tailroom, the dst
mbuf in out-of-place mode should reserve the headroom and
tailroom as well, otherwise there will not be enough room
in the dst mbuf.
Fixes: bf9d6702eca9 ("app/crypto-perf: use single mempool")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
app/test-crypto-perf/cperf_test_common.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/app/test-crypto-perf/cperf_test_common.c b/app/test-crypto-perf/cperf_test_common.c
index dbb08588ee..94d39fb177 100644
--- a/app/test-crypto-perf/cperf_test_common.c
+++ b/app/test-crypto-perf/cperf_test_common.c
@@ -226,7 +226,8 @@ cperf_alloc_common_memory(const struct cperf_options *options,
(mbuf_size * segments_nb);
params.dst_buf_offset = *dst_buf_offset;
/* Destination buffer will be one segment only */
- obj_size += max_size + sizeof(struct rte_mbuf);
+ obj_size += max_size + sizeof(struct rte_mbuf) +
+ options->headroom_sz + options->tailroom_sz;
}
*pool = rte_mempool_create_empty(pool_name,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.457471131 +0800
+++ 0009-app-crypto-perf-fix-out-of-place-mbuf-size.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 24515c93197091437e32f35bba3f467c01633c1d Mon Sep 17 00:00:00 2001
+From a1f1843146e243594b105d0732aeae39196eb1c6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 24515c93197091437e32f35bba3f467c01633c1d ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/crypto-perf: add missing op resubmission' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (7 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/crypto-perf: fix out-of-place mbuf " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'doc: fix typos in cryptodev overview' " Xueming Li
` (114 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Suanming Mou; +Cc: Anoob Joseph, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=14c38e2db1ea3f8d09969caec97facbb93600526
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 14c38e2db1ea3f8d09969caec97facbb93600526 Mon Sep 17 00:00:00 2001
From: Suanming Mou <suanmingm@nvidia.com>
Date: Mon, 15 Jan 2024 16:08:30 +0800
Subject: [PATCH] app/crypto-perf: add missing op resubmission
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 74d7c028ecf478f18cf9623210bab459d5992d7a ]
Currently, after enqueue_burst, there may be ops_unused ops
left over for the next round of enqueue. In the next round's
preparation, only ops_needed ops are added. But if in the final
round fewer ops are left than ops_needed, there will be invalid
ops between the newly needed ops and the previously unused ops.
The previously unused ops should be moved to the front, right
after the needed ops.
In commit [1], a resubmission fix was added to the throughput
test, but the fix was missed for the verify test.
This commit adds the missing resubmission fix for the verify test.
[1]
commit 44e2980b70d1 ("app/crypto-perf: fix crypto operation resubmission")
Fixes: f8be1786b1b8 ("app/crypto-perf: introduce performance test application")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
app/test-crypto-perf/cperf_test_verify.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 2b0d3f142b..10172a53a0 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -275,7 +275,6 @@ cperf_verify_test_runner(void *test_ctx)
ops_needed, ctx->sess, ctx->options,
ctx->test_vector, iv_offset, &imix_idx, NULL);
-
/* Populate the mbuf with the test vector, for verification */
for (i = 0; i < ops_needed; i++)
cperf_mbuf_set(ops[i]->sym->m_src,
@@ -293,6 +292,17 @@ cperf_verify_test_runner(void *test_ctx)
}
#endif /* CPERF_LINEARIZATION_ENABLE */
+ /**
+ * When ops_needed is smaller than ops_enqd, the
+ * unused ops need to be moved to the front for
+ * next round use.
+ */
+ if (unlikely(ops_enqd > ops_needed)) {
+ size_t nb_b_to_mov = ops_unused * sizeof(struct rte_crypto_op *);
+
+ memmove(&ops[ops_needed], &ops[ops_enqd], nb_b_to_mov);
+ }
+
/* Enqueue burst of ops on crypto device */
ops_enqd = rte_cryptodev_enqueue_burst(ctx->dev_id, ctx->qp_id,
ops, burst_size);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.479948002 +0800
+++ 0010-app-crypto-perf-add-missing-op-resubmission.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 74d7c028ecf478f18cf9623210bab459d5992d7a Mon Sep 17 00:00:00 2001
+From 14c38e2db1ea3f8d09969caec97facbb93600526 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 74d7c028ecf478f18cf9623210bab459d5992d7a ]
@@ -23 +25,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'doc: fix typos in cryptodev overview' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (8 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/crypto-perf: add missing op resubmission' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/qat: fix legacy flag' " Xueming Li
` (113 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Andrew Boyer; +Cc: Akhil Goyal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6cacd0e502e6228008f269d8204fa736c56313ca
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6cacd0e502e6228008f269d8204fa736c56313ca Mon Sep 17 00:00:00 2001
From: Andrew Boyer <andrew.boyer@amd.com>
Date: Thu, 22 Feb 2024 09:41:11 -0800
Subject: [PATCH] doc: fix typos in cryptodev overview
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 85256fea3859b57451657919486e4559b0f2677c ]
Very minor improvements.
Fixes: 2717246ecd7d ("cryptodev: replace mbuf scatter gather flag")
Signed-off-by: Andrew Boyer <andrew.boyer@amd.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/cryptodevs/overview.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/cryptodevs/overview.rst b/doc/guides/cryptodevs/overview.rst
index d754b0cfc6..b068d0d19c 100644
--- a/doc/guides/cryptodevs/overview.rst
+++ b/doc/guides/cryptodevs/overview.rst
@@ -20,17 +20,17 @@ Supported Feature Flags
- "OOP SGL In SGL Out" feature flag stands for
"Out-of-place Scatter-gather list Input, Scatter-gather list Output",
which means PMD supports different scatter-gather styled input and output buffers
- (i.e. both can consists of multiple segments).
+ (i.e. both can consist of multiple segments).
- "OOP SGL In LB Out" feature flag stands for
"Out-of-place Scatter-gather list Input, Linear Buffers Output",
- which means PMD supports input from scatter-gathered styled buffers,
+ which means PMD supports input from scatter-gather styled buffers,
outputting linear buffers (i.e. single segment).
- "OOP LB In SGL Out" feature flag stands for
"Out-of-place Linear Buffers Input, Scatter-gather list Output",
which means PMD supports input from linear buffer, outputting
- scatter-gathered styled buffers.
+ scatter-gather styled buffers.
- "OOP LB In LB Out" feature flag stands for
"Out-of-place Linear Buffers Input, Linear Buffers Output",
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.503547371 +0800
+++ 0011-doc-fix-typos-in-cryptodev-overview.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 85256fea3859b57451657919486e4559b0f2677c Mon Sep 17 00:00:00 2001
+From 6cacd0e502e6228008f269d8204fa736c56313ca Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 85256fea3859b57451657919486e4559b0f2677c ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'common/qat: fix legacy flag' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (9 preceding siblings ...)
2024-04-13 12:48 ` patch 'doc: fix typos in cryptodev overview' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'vhost: fix VDUSE device destruction failure' " Xueming Li
` (112 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Arkadiusz Kusztal; +Cc: Brian Dooley, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=af414b892d61870494e57aba0b9147ecec03c3d1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From af414b892d61870494e57aba0b9147ecec03c3d1 Mon Sep 17 00:00:00 2001
From: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Date: Fri, 1 Mar 2024 15:52:51 +0000
Subject: [PATCH] common/qat: fix legacy flag
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8191276fc5c10d55df72fd3bb80f97409b620318 ]
This commit fixes a legacy flag, which was placed in a file
that may not be included in the build process.
Fixes: cffb726b7797 ("crypto/qat: enable insecure algorithms")
Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Brian Dooley <brian.dooley@intel.com>
---
drivers/common/qat/qat_device.c | 1 +
drivers/crypto/qat/qat_sym.c | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index f55dc3c6f0..eceb5c89c4 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -29,6 +29,7 @@ struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
/* per-process array of device data */
struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
static int qat_nb_pci_devices;
+int qat_legacy_capa;
/*
* The set of PCI devices this driver supports
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 6e03bde841..4fa539769b 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -18,7 +18,6 @@
#include "qat_qp.h"
uint8_t qat_sym_driver_id;
-int qat_legacy_capa;
struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.526426641 +0800
+++ 0012-common-qat-fix-legacy-flag.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 8191276fc5c10d55df72fd3bb80f97409b620318 Mon Sep 17 00:00:00 2001
+From af414b892d61870494e57aba0b9147ecec03c3d1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8191276fc5c10d55df72fd3bb80f97409b620318 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index a27252ea4d..500ca0f308 100644
+index f55dc3c6f0..eceb5c89c4 100644
@@ -23 +25 @@
-@@ -31,6 +31,7 @@ struct qat_service qat_service[QAT_MAX_SERVICES];
+@@ -29,6 +29,7 @@ struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
@@ -32 +34 @@
-index efca8f3ba1..6c7b1724ef 100644
+index 6e03bde841..4fa539769b 100644
@@ -35 +37 @@
-@@ -19,7 +19,6 @@
+@@ -18,7 +18,6 @@
@@ -41,2 +43,2 @@
- #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold"
- #define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable"
+ struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
+
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'vhost: fix VDUSE device destruction failure' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (10 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/qat: fix legacy flag' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/af_xdp: fix leak on XSK configuration " Xueming Li
` (111 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b2dba501cf2d141ae2cb01faf58077b62ffb57d2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b2dba501cf2d141ae2cb01faf58077b62ffb57d2 Mon Sep 17 00:00:00 2001
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 29 Feb 2024 13:24:56 +0100
Subject: [PATCH] vhost: fix VDUSE device destruction failure
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit deedfb86a7a6e10064d3cccd593f62072de96e36 ]
VDUSE_DESTROY_DEVICE ioctl can fail because the device's
chardev is not released despite close syscall having been
called. It happens because the events handler thread is
still polling the file descriptor.
fdset_pipe_notify() is not enough because it does not
ensure the notification has been handled by the event
thread; it just returns once the notification is sent.
To fix this, this patch introduces a synchronization
mechanism based on a pthread condition variable, so that
fdset_pipe_notify_sync() only returns once the pipe's
read callback has been executed.
Fixes: 51d018fdac4e ("vhost: add VDUSE events handler")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/fd_man.c | 23 +++++++++++++++++++++--
lib/vhost/fd_man.h | 6 ++++++
lib/vhost/socket.c | 1 +
lib/vhost/vduse.c | 3 ++-
4 files changed, 30 insertions(+), 3 deletions(-)
diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c
index 134414fb4b..84c5da0793 100644
--- a/lib/vhost/fd_man.c
+++ b/lib/vhost/fd_man.c
@@ -307,10 +307,11 @@ fdset_event_dispatch(void *arg)
}
static void
-fdset_pipe_read_cb(int readfd, void *dat __rte_unused,
+fdset_pipe_read_cb(int readfd, void *dat,
int *remove __rte_unused)
{
char charbuf[16];
+ struct fdset *fdset = dat;
int r = read(readfd, charbuf, sizeof(charbuf));
/*
* Just an optimization, we don't care if read() failed
@@ -318,6 +319,11 @@ fdset_pipe_read_cb(int readfd, void *dat __rte_unused,
* compiler happy
*/
RTE_SET_USED(r);
+
+ pthread_mutex_lock(&fdset->sync_mutex);
+ fdset->sync = true;
+ pthread_cond_broadcast(&fdset->sync_cond);
+ pthread_mutex_unlock(&fdset->sync_mutex);
}
void
@@ -340,7 +346,7 @@ fdset_pipe_init(struct fdset *fdset)
}
ret = fdset_add(fdset, fdset->u.readfd,
- fdset_pipe_read_cb, NULL, NULL);
+ fdset_pipe_read_cb, NULL, fdset);
if (ret < 0) {
RTE_LOG(ERR, VHOST_FDMAN,
@@ -364,5 +370,18 @@ fdset_pipe_notify(struct fdset *fdset)
* compiler happy
*/
RTE_SET_USED(r);
+}
+
+void
+fdset_pipe_notify_sync(struct fdset *fdset)
+{
+ pthread_mutex_lock(&fdset->sync_mutex);
+
+ fdset->sync = false;
+ fdset_pipe_notify(fdset);
+
+ while (!fdset->sync)
+ pthread_cond_wait(&fdset->sync_cond, &fdset->sync_mutex);
+ pthread_mutex_unlock(&fdset->sync_mutex);
}
diff --git a/lib/vhost/fd_man.h b/lib/vhost/fd_man.h
index 6315904c8e..7816fb11ac 100644
--- a/lib/vhost/fd_man.h
+++ b/lib/vhost/fd_man.h
@@ -6,6 +6,7 @@
#define _FD_MAN_H_
#include <pthread.h>
#include <poll.h>
+#include <stdbool.h>
#define MAX_FDS 1024
@@ -35,6 +36,10 @@ struct fdset {
int writefd;
};
} u;
+
+ pthread_mutex_t sync_mutex;
+ pthread_cond_t sync_cond;
+ bool sync;
};
@@ -53,5 +58,6 @@ int fdset_pipe_init(struct fdset *fdset);
void fdset_pipe_uninit(struct fdset *fdset);
void fdset_pipe_notify(struct fdset *fdset);
+void fdset_pipe_notify_sync(struct fdset *fdset);
#endif
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 5882e44176..0b95c54c5b 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -93,6 +93,7 @@ static struct vhost_user vhost_user = {
.fd = { [0 ... MAX_FDS - 1] = {-1, NULL, NULL, NULL, 0} },
.fd_mutex = PTHREAD_MUTEX_INITIALIZER,
.fd_pooling_mutex = PTHREAD_MUTEX_INITIALIZER,
+ .sync_mutex = PTHREAD_MUTEX_INITIALIZER,
.num = 0
},
.vsocket_cnt = 0,
diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c
index e198eeef64..b46f0e53c7 100644
--- a/lib/vhost/vduse.c
+++ b/lib/vhost/vduse.c
@@ -36,6 +36,7 @@ static struct vduse vduse = {
.fd = { [0 ... MAX_FDS - 1] = {-1, NULL, NULL, NULL, 0} },
.fd_mutex = PTHREAD_MUTEX_INITIALIZER,
.fd_pooling_mutex = PTHREAD_MUTEX_INITIALIZER,
+ .sync_mutex = PTHREAD_MUTEX_INITIALIZER,
.num = 0
},
};
@@ -618,7 +619,7 @@ vduse_device_destroy(const char *path)
vduse_device_stop(dev);
fdset_del(&vduse.fdset, dev->vduse_dev_fd);
- fdset_pipe_notify(&vduse.fdset);
+ fdset_pipe_notify_sync(&vduse.fdset);
if (dev->vduse_dev_fd >= 0) {
close(dev->vduse_dev_fd);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.548843712 +0800
+++ 0013-vhost-fix-VDUSE-device-destruction-failure.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From deedfb86a7a6e10064d3cccd593f62072de96e36 Mon Sep 17 00:00:00 2001
+From b2dba501cf2d141ae2cb01faf58077b62ffb57d2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit deedfb86a7a6e10064d3cccd593f62072de96e36 ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -34 +36 @@
-index 79a8d2c006..481e6b900a 100644
+index 134414fb4b..84c5da0793 100644
@@ -37 +39 @@
-@@ -309,10 +309,11 @@ fdset_event_dispatch(void *arg)
+@@ -307,10 +307,11 @@ fdset_event_dispatch(void *arg)
@@ -50 +52 @@
-@@ -320,6 +321,11 @@ fdset_pipe_read_cb(int readfd, void *dat __rte_unused,
+@@ -318,6 +319,11 @@ fdset_pipe_read_cb(int readfd, void *dat __rte_unused,
@@ -62 +64 @@
-@@ -342,7 +348,7 @@ fdset_pipe_init(struct fdset *fdset)
+@@ -340,7 +346,7 @@ fdset_pipe_init(struct fdset *fdset)
@@ -70,2 +72,2 @@
- VHOST_FDMAN_LOG(ERR,
-@@ -366,5 +372,18 @@ fdset_pipe_notify(struct fdset *fdset)
+ RTE_LOG(ERR, VHOST_FDMAN,
+@@ -364,5 +370,18 @@ fdset_pipe_notify(struct fdset *fdset)
@@ -121 +123 @@
-index a2fdac30a4..96b3ab5595 100644
+index 5882e44176..0b95c54c5b 100644
@@ -133 +135 @@
-index d462428d2c..e0c6991b69 100644
+index e198eeef64..b46f0e53c7 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/af_xdp: fix leak on XSK configuration failure' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (11 preceding siblings ...)
2024-04-13 12:48 ` patch 'vhost: fix VDUSE device destruction failure' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/testpmd: fix flow modify tag typo' " Xueming Li
` (110 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Yunjian Wang; +Cc: Ciara Loftus, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c2d52df5993845a8baac976c631d4794458313ba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c2d52df5993845a8baac976c631d4794458313ba Mon Sep 17 00:00:00 2001
From: Yunjian Wang <wangyunjian@huawei.com>
Date: Fri, 23 Feb 2024 09:45:45 +0800
Subject: [PATCH] net/af_xdp: fix leak on XSK configuration failure
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 955acb9594cec548ae57319bfc99d4297d773675 ]
xdp_umem_configure() allocates some resources for the
xsk umem; we should delete them when the xsk configuration
fails, otherwise the resources will leak.
Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Ciara Loftus <ciara.loftus@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 2d151e45c7..268a130c49 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -960,6 +960,9 @@ remove_xdp_program(struct pmd_internals *internals)
static void
xdp_umem_destroy(struct xsk_umem_info *umem)
{
+ (void)xsk_umem__delete(umem->umem);
+ umem->umem = NULL;
+
#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
umem->mb_pool = NULL;
#else
@@ -992,11 +995,8 @@ eth_dev_close(struct rte_eth_dev *dev)
break;
xsk_socket__delete(rxq->xsk);
- if (__atomic_fetch_sub(&rxq->umem->refcnt, 1, __ATOMIC_ACQUIRE) - 1
- == 0) {
- (void)xsk_umem__delete(rxq->umem->umem);
+ if (__atomic_fetch_sub(&rxq->umem->refcnt, 1, __ATOMIC_ACQUIRE) - 1 == 0)
xdp_umem_destroy(rxq->umem);
- }
/* free pkt_tx_queue */
rte_free(rxq->pair);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.577879974 +0800
+++ 0014-net-af_xdp-fix-leak-on-XSK-configuration-failure.patch 2024-04-13 20:43:04.907754049 +0800
@@ -1 +1 @@
-From 955acb9594cec548ae57319bfc99d4297d773675 Mon Sep 17 00:00:00 2001
+From c2d52df5993845a8baac976c631d4794458313ba Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 955acb9594cec548ae57319bfc99d4297d773675 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/testpmd: fix flow modify tag typo' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (12 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/af_xdp: fix leak on XSK configuration " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix modify flex item' " Xueming Li
` (109 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rongwei Liu; +Cc: Dariusz Sosnowski, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4d1331e9725ad7947d98287f9cecce516c73c311
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4d1331e9725ad7947d98287f9cecce516c73c311 Mon Sep 17 00:00:00 2001
From: Rongwei Liu <rongweil@nvidia.com>
Date: Fri, 23 Feb 2024 05:21:54 +0200
Subject: [PATCH] app/testpmd: fix flow modify tag typo
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c68992beeeef90be917b88e013cb9dcbe2ee221e ]
Update the name to the right one: "src_tag_index"
Fixes: c23626f27b09 ("ethdev: add MPLS header modification")
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline_flow.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index ce71818705..5647add29a 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -6905,7 +6905,7 @@ static const struct token token_list[] = {
.comp = comp_none,
},
[ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
- .name = "stc_tag_index",
+ .name = "src_tag_index",
.help = "source field tag array",
.next = NEXT(action_modify_field_src,
NEXT_ENTRY(COMMON_UNSIGNED)),
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.608018034 +0800
+++ 0015-app-testpmd-fix-flow-modify-tag-typo.patch 2024-04-13 20:43:04.917754036 +0800
@@ -1 +1 @@
-From c68992beeeef90be917b88e013cb9dcbe2ee221e Mon Sep 17 00:00:00 2001
+From 4d1331e9725ad7947d98287f9cecce516c73c311 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c68992beeeef90be917b88e013cb9dcbe2ee221e ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index b4389e5150..f69516faf2 100644
+index ce71818705..5647add29a 100644
@@ -22 +24 @@
-@@ -7316,7 +7316,7 @@ static const struct token token_list[] = {
+@@ -6905,7 +6905,7 @@ static const struct token token_list[] = {
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix modify flex item' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (13 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/testpmd: fix flow modify tag typo' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/testpmd: return if no packets in GRO heavy weight mode' " Xueming Li
` (108 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rongwei Liu; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=61ce57b13af50c7ccb2dd937fc1d3e82e45d5bcf
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 61ce57b13af50c7ccb2dd937fc1d3e82e45d5bcf Mon Sep 17 00:00:00 2001
From: Rongwei Liu <rongweil@nvidia.com>
Date: Fri, 23 Feb 2024 05:21:55 +0200
Subject: [PATCH] net/mlx5: fix modify flex item
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 74f98c1586e73663b4ed69b7dbd3d085776d93df ]
In the rte_flow_field_data structure, the flex item handle is part
of a union with other members like level/tag_index.
If the user wants to modify the flex item as source or destination,
there should not be any check against zero.
Fixes: c23626f27b09 ("ethdev: add MPLS header modification")
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 2 +-
drivers/net/mlx5/mlx5_flow_hw.c | 40 ++++++++++++++++++---------------
2 files changed, 23 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 85e8c77c81..3e31945f99 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2504,7 +2504,7 @@ int
flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
struct rte_flow_error *error)
{
- if (data->level == 0)
+ if (data->level == 0 || data->field == RTE_FLOW_FIELD_FLEX_ITEM)
return 0;
if (data->field != RTE_FLOW_FIELD_TAG &&
data->field != (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..bda1ecf121 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4975,15 +4975,17 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
ret = flow_validate_modify_field_level(&action_conf->dst, error);
if (ret)
return ret;
- if (action_conf->dst.tag_index &&
- !flow_modify_field_support_tag_array(action_conf->dst.field))
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "destination tag index is not supported");
- if (action_conf->dst.class_id)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "destination class id is not supported");
+ if (action_conf->dst.field != RTE_FLOW_FIELD_FLEX_ITEM) {
+ if (action_conf->dst.tag_index &&
+ !flow_modify_field_support_tag_array(action_conf->dst.field))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "destination tag index is not supported");
+ if (action_conf->dst.class_id)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "destination class id is not supported");
+ }
if (mask_conf->dst.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -4998,15 +5000,17 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
"destination field mask and template are not equal");
if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
- if (action_conf->src.tag_index &&
- !flow_modify_field_support_tag_array(action_conf->src.field))
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "source tag index is not supported");
- if (action_conf->src.class_id)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, action,
- "source class id is not supported");
+ if (action_conf->src.field != RTE_FLOW_FIELD_FLEX_ITEM) {
+ if (action_conf->src.tag_index &&
+ !flow_modify_field_support_tag_array(action_conf->src.field))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "source tag index is not supported");
+ if (action_conf->src.class_id)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "source class id is not supported");
+ }
if (mask_conf->src.level != UINT8_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.641364891 +0800
+++ 0016-net-mlx5-fix-modify-flex-item.patch 2024-04-13 20:43:04.927754023 +0800
@@ -1 +1 @@
-From 74f98c1586e73663b4ed69b7dbd3d085776d93df Mon Sep 17 00:00:00 2001
+From 61ce57b13af50c7ccb2dd937fc1d3e82e45d5bcf Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 74f98c1586e73663b4ed69b7dbd3d085776d93df ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -18,3 +20,3 @@
- drivers/net/mlx5/mlx5_flow.c | 2 +-
- drivers/net/mlx5/mlx5_flow_hw.c | 6 ++++--
- 2 files changed, 5 insertions(+), 3 deletions(-)
+ drivers/net/mlx5/mlx5_flow.c | 2 +-
+ drivers/net/mlx5/mlx5_flow_hw.c | 40 ++++++++++++++++++---------------
+ 2 files changed, 23 insertions(+), 19 deletions(-)
@@ -23 +25 @@
-index 2b2ae62618..d8ed1ed6f6 100644
+index 85e8c77c81..3e31945f99 100644
@@ -26,2 +28,2 @@
-@@ -2384,7 +2384,7 @@ int
- flow_validate_modify_field_level(const struct rte_flow_field_data *data,
+@@ -2504,7 +2504,7 @@ int
+ flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
@@ -36 +38 @@
-index bcf43f5457..5b269b9c82 100644
+index da873ae2e2..bda1ecf121 100644
@@ -39 +41 @@
-@@ -5036,7 +5036,8 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
+@@ -4975,15 +4975,17 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
@@ -43,7 +45,24 @@
-- if (!flow_hw_modify_field_is_geneve_opt(action_conf->dst.field)) {
-+ if (action_conf->dst.field != RTE_FLOW_FIELD_FLEX_ITEM &&
-+ !flow_hw_modify_field_is_geneve_opt(action_conf->dst.field)) {
- if (action_conf->dst.tag_index &&
- !flow_modify_field_support_tag_array(action_conf->dst.field))
- return rte_flow_error_set(error, EINVAL,
-@@ -5061,7 +5062,8 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
+- if (action_conf->dst.tag_index &&
+- !flow_modify_field_support_tag_array(action_conf->dst.field))
+- return rte_flow_error_set(error, EINVAL,
+- RTE_FLOW_ERROR_TYPE_ACTION, action,
+- "destination tag index is not supported");
+- if (action_conf->dst.class_id)
+- return rte_flow_error_set(error, EINVAL,
+- RTE_FLOW_ERROR_TYPE_ACTION, action,
+- "destination class id is not supported");
++ if (action_conf->dst.field != RTE_FLOW_FIELD_FLEX_ITEM) {
++ if (action_conf->dst.tag_index &&
++ !flow_modify_field_support_tag_array(action_conf->dst.field))
++ return rte_flow_error_set(error, EINVAL,
++ RTE_FLOW_ERROR_TYPE_ACTION, action,
++ "destination tag index is not supported");
++ if (action_conf->dst.class_id)
++ return rte_flow_error_set(error, EINVAL,
++ RTE_FLOW_ERROR_TYPE_ACTION, action,
++ "destination class id is not supported");
++ }
+ if (mask_conf->dst.level != UINT8_MAX)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+@@ -4998,15 +5000,17 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
@@ -53,6 +72,23 @@
-- if (!flow_hw_modify_field_is_geneve_opt(action_conf->src.field)) {
-+ if (action_conf->src.field != RTE_FLOW_FIELD_FLEX_ITEM &&
-+ !flow_hw_modify_field_is_geneve_opt(action_conf->src.field)) {
- if (action_conf->src.tag_index &&
- !flow_modify_field_support_tag_array(action_conf->src.field))
- return rte_flow_error_set(error, EINVAL,
+- if (action_conf->src.tag_index &&
+- !flow_modify_field_support_tag_array(action_conf->src.field))
+- return rte_flow_error_set(error, EINVAL,
+- RTE_FLOW_ERROR_TYPE_ACTION, action,
+- "source tag index is not supported");
+- if (action_conf->src.class_id)
+- return rte_flow_error_set(error, EINVAL,
+- RTE_FLOW_ERROR_TYPE_ACTION, action,
+- "source class id is not supported");
++ if (action_conf->src.field != RTE_FLOW_FIELD_FLEX_ITEM) {
++ if (action_conf->src.tag_index &&
++ !flow_modify_field_support_tag_array(action_conf->src.field))
++ return rte_flow_error_set(error, EINVAL,
++ RTE_FLOW_ERROR_TYPE_ACTION, action,
++ "source tag index is not supported");
++ if (action_conf->src.class_id)
++ return rte_flow_error_set(error, EINVAL,
++ RTE_FLOW_ERROR_TYPE_ACTION, action,
++ "source class id is not supported");
++ }
+ if (mask_conf->src.level != UINT8_MAX)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/testpmd: return if no packets in GRO heavy weight mode' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (14 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix modify flex item' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'app/testpmd: fix async flow create failure handling' " Xueming Li
` (107 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Kumara Parameshwaran; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=92ab2d6da25f20572d4fd77e5ff792eae6e6504d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 92ab2d6da25f20572d4fd77e5ff792eae6e6504d Mon Sep 17 00:00:00 2001
From: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Date: Sun, 25 Feb 2024 11:46:36 +0530
Subject: [PATCH] app/testpmd: return if no packets in GRO heavy weight mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0007e4045c9efe6a20ad2590dfa68a86cc778b48 ]
If no packets are flushed in GRO heavy weight mode, return false,
as the fall-through code would otherwise return true, indicating
that packets are available.
Fixes: 461c287ab553 ("app/testpmd: fix GRO packets flush on timeout")
Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/csumonly.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index d73f08b2c6..6711dda42e 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -1142,6 +1142,8 @@ tunnel_update:
gro_pkts_num);
fs->gro_times = 0;
}
+ if (nb_rx == 0)
+ return false;
}
pkts_ip_csum_recalc(pkts_burst, nb_rx, tx_offloads);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.694181222 +0800
+++ 0017-app-testpmd-return-if-no-packets-in-GRO-heavy-weight.patch 2024-04-13 20:43:04.927754023 +0800
@@ -1 +1 @@
-From 0007e4045c9efe6a20ad2590dfa68a86cc778b48 Mon Sep 17 00:00:00 2001
+From 92ab2d6da25f20572d4fd77e5ff792eae6e6504d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0007e4045c9efe6a20ad2590dfa68a86cc778b48 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/testpmd: fix async flow create failure handling' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (15 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/testpmd: return if no packets in GRO heavy weight mode' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/tap: do not overwrite flow API errors' " Xueming Li
` (106 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4a1ffc9b02bcda814389a53b914cdad06e03f872
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4a1ffc9b02bcda814389a53b914cdad06e03f872 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wed, 28 Feb 2024 19:57:07 +0100
Subject: [PATCH] app/testpmd: fix async flow create failure handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0da12ecba770873851a3a63dc08052271a350aeb ]
In case of an error when an asynchronous flow create operation was
enqueued, test-pmd attempted to enqueue a flow destroy operation
of that flow rule.
However, this was incorrect because:
- Flow rule index was used to enqueue a flow destroy operation.
This flow rule index was not yet initialized, so flow rule number 0
was always destroyed as a result.
- Since rte_flow_async_create() does not return a handle on error,
then there is no flow rule to destroy.
test-pmd only needs to free the internal memory allocated for
storing the flow rule. No flow destroy operation is needed
in this case.
Fixes: ecdc927b99f2 ("app/testpmd: add async flow create/destroy operations")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
app/test-pmd/config.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2c4dedd603..7c24e401ec 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2789,8 +2789,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
flow = rte_flow_async_create_by_index(port_id, queue_id, &op_attr, pt->table,
rule_idx, actions, actions_idx, job, &error);
if (!flow) {
- uint64_t flow_id = pf->id;
- port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+ free(pf);
free(job);
return port_flow_complain(&error);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.724984282 +0800
+++ 0018-app-testpmd-fix-async-flow-create-failure-handling.patch 2024-04-13 20:43:04.927754023 +0800
@@ -1 +1 @@
-From 0da12ecba770873851a3a63dc08052271a350aeb Mon Sep 17 00:00:00 2001
+From 4a1ffc9b02bcda814389a53b914cdad06e03f872 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0da12ecba770873851a3a63dc08052271a350aeb ]
@@ -22 +24,0 @@
-Cc: stable@dpdk.org
@@ -31 +33 @@
-index cd2a436cd7..968d2164ab 100644
+index 2c4dedd603..7c24e401ec 100644
@@ -34 +36 @@
-@@ -2856,8 +2856,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+@@ -2789,8 +2789,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/tap: do not overwrite flow API errors' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (16 preceding siblings ...)
2024-04-13 12:48 ` patch 'app/testpmd: fix async flow create failure handling' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/tap: fix traffic control handle calculation' " Xueming Li
` (105 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e9462a5690b6e3d09930960bfd1ea6906d848263
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e9462a5690b6e3d09930960bfd1ea6906d848263 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 29 Feb 2024 09:31:07 -0800
Subject: [PATCH] net/tap: do not overwrite flow API errors
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 11b90b53c6716ca9bc713bab6cfba039fe8e38cb ]
All flow errors were ending up being reported as not supported,
even when the error path was previously setting a valid and
better error message.
Example, asking for a non-existent queue in flow.
Before:
testpmd> flow create 0 ingress pattern eth src is 06:05:04:03:02:01 \
/ end actions queue index 12 / end
port_flow_complain(): Caught PMD error type 16 (specific action):
cause: 0x7fffc46c1e18, action not supported: Operation not supported
After:
testpmd> flow create 0 ingress pattern eth src is 06:05:04:03:02:01 \
/ end actions queue index 12 / end
port_flow_complain(): Caught PMD error type 16 (specific action):
cause: 0x7fffa54e1d88, queue index out of range: Numerical result
out of range
Fixes: f46900d03823 ("net/tap: fix flow and port commands")
Fixes: de96fe68ae95 ("net/tap: add basic flow API patterns and actions")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/tap/tap_flow.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index ed4d42f92f..5b0fee9064 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -1082,8 +1082,11 @@ priv_flow_process(struct pmd_internals *pmd,
}
/* use flower filter type */
tap_nlattr_add(&flow->msg.nh, TCA_KIND, sizeof("flower"), "flower");
- if (tap_nlattr_nested_start(&flow->msg, TCA_OPTIONS) < 0)
- goto exit_item_not_supported;
+ if (tap_nlattr_nested_start(&flow->msg, TCA_OPTIONS) < 0) {
+ rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION,
+ actions, "could not allocated netlink msg");
+ goto exit_return_error;
+ }
}
for (; items->type != RTE_FLOW_ITEM_TYPE_END; ++items) {
const struct tap_flow_items *token = NULL;
@@ -1199,9 +1202,12 @@ actions:
if (action)
goto exit_action_not_supported;
action = 1;
- if (!queue ||
- (queue->index > pmd->dev->data->nb_rx_queues - 1))
- goto exit_action_not_supported;
+ if (queue->index >= pmd->dev->data->nb_rx_queues) {
+ rte_flow_error_set(error, ERANGE,
+ RTE_FLOW_ERROR_TYPE_ACTION, actions,
+ "queue index out of range");
+ goto exit_return_error;
+ }
if (flow) {
struct action_data adata = {
.id = "skbedit",
@@ -1227,7 +1233,7 @@ actions:
if (!pmd->rss_enabled) {
err = rss_enable(pmd, attr, error);
if (err)
- goto exit_action_not_supported;
+ goto exit_return_error;
}
if (flow)
err = rss_add_actions(flow, pmd, rss, error);
@@ -1235,7 +1241,7 @@ actions:
goto exit_action_not_supported;
}
if (err)
- goto exit_action_not_supported;
+ goto exit_return_error;
}
/* When fate is unknown, drop traffic. */
if (!action) {
@@ -1258,6 +1264,7 @@ exit_item_not_supported:
exit_action_not_supported:
rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
actions, "action not supported");
+exit_return_error:
return -rte_errno;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.752023346 +0800
+++ 0019-net-tap-do-not-overwrite-flow-API-errors.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 11b90b53c6716ca9bc713bab6cfba039fe8e38cb Mon Sep 17 00:00:00 2001
+From e9462a5690b6e3d09930960bfd1ea6906d848263 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 11b90b53c6716ca9bc713bab6cfba039fe8e38cb ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/tap: fix traffic control handle calculation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (17 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/tap: do not overwrite flow API errors' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/bnxt: fix null pointer dereference' " Xueming Li
` (104 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d5bfd9fdf17cb809a9e32da71d2abc92f2bf871
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d5bfd9fdf17cb809a9e32da71d2abc92f2bf871 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 29 Feb 2024 09:31:08 -0800
Subject: [PATCH] net/tap: fix traffic control handle calculation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4e924ff6f789c6a67424263bf384f3e4b4fba373 ]
The code to take a flow pointer and make a TC handle was incorrect
and would always generate the same handle. This is because it was
hashing the address of the union on the stack (which is invariant)
rather than the contents of the union.
The following testpmd case would cause an error:
testpmd> flow create 0 ingress pattern eth src is 06:05:04:03:02:01 \
/ end actions queue index 2 / end
Flow rule #0 created
testpmd> flow create 0 ingress pattern eth src is 06:05:04:03:02:02 \
/ end actions queue index 3 / end
tap_nl_dump_ext_ack(): Filter already exists
tap_flow_create(): Kernel refused TC filter rule creation (17): File exists
port_flow_complain(): Caught PMD error type 2 (flow rule (handle)):
overlapping rules or Kernel too old for flower support: File exists
This fix does it in a more robust manner using size-independent
code. It also randomizes the hash seed so the same hash won't
show up every run, which could otherwise leak the address to
other places.
Bugzilla ID: 1382
Fixes: de96fe68ae95 ("net/tap: add basic flow API patterns and actions")
Fixes: a625ab89df11 ("net/tap: fix build with GCC 11")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/tap/tap_flow.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c
index 5b0fee9064..fa50fe45d7 100644
--- a/drivers/net/tap/tap_flow.c
+++ b/drivers/net/tap/tap_flow.c
@@ -11,6 +11,7 @@
#include <rte_byteorder.h>
#include <rte_jhash.h>
+#include <rte_random.h>
#include <rte_malloc.h>
#include <rte_eth_tap.h>
#include <tap_flow.h>
@@ -1297,9 +1298,7 @@ tap_flow_validate(struct rte_eth_dev *dev,
* In those rules, the handle (uint32_t) is the part that would identify
* specifically each rule.
*
- * On 32-bit architectures, the handle can simply be the flow's pointer address.
- * On 64-bit architectures, we rely on jhash(flow) to find a (sufficiently)
- * unique handle.
+ * Use jhash of the flow pointer to make a unique handle.
*
* @param[in, out] flow
* The flow that needs its handle set.
@@ -1309,16 +1308,18 @@ tap_flow_set_handle(struct rte_flow *flow)
{
union {
struct rte_flow *flow;
- const void *key;
- } tmp;
- uint32_t handle = 0;
+ uint32_t words[sizeof(flow) / sizeof(uint32_t)];
+ } tmp = {
+ .flow = flow,
+ };
+ uint32_t handle;
+ static uint64_t hash_seed;
- tmp.flow = flow;
+ if (hash_seed == 0)
+ hash_seed = rte_rand();
+
+ handle = rte_jhash_32b(tmp.words, sizeof(flow) / sizeof(uint32_t), hash_seed);
- if (sizeof(flow) > 4)
- handle = rte_jhash(tmp.key, sizeof(flow), 1);
- else
- handle = (uintptr_t)flow;
/* must be at least 1 to avoid letting the kernel choose one for us */
if (!handle)
handle = 1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.777702013 +0800
+++ 0020-net-tap-fix-traffic-control-handle-calculation.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 4e924ff6f789c6a67424263bf384f3e4b4fba373 Mon Sep 17 00:00:00 2001
+From 1d5bfd9fdf17cb809a9e32da71d2abc92f2bf871 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4e924ff6f789c6a67424263bf384f3e4b4fba373 ]
@@ -30 +32,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/bnxt: fix null pointer dereference' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (18 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/tap: fix traffic control handle calculation' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/ixgbevf: fix RSS init for x550 NICs' " Xueming Li
` (103 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Kalesh AP; +Cc: Ajit Khaparde, Somnath Kotur, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a71de447a2351b7c5b4f95446b5382e253415232
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a71de447a2351b7c5b4f95446b5382e253415232 Mon Sep 17 00:00:00 2001
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Date: Wed, 7 Feb 2024 01:19:02 -0800
Subject: [PATCH] net/bnxt: fix null pointer dereference
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 68eeafdef4db7362ff5307995b670a98f65f2493 ]
In the recent changes to rte_eth_dev_release_port() the library sets
eth_dev->data to NULL at the end of the routine. This causes a NULL
pointer dereference in the bnxt_rep_dev_info_get_op() and
bnxt_representor_uninit() routines when it tries to validate parent dev.
Add code to handle this.
Fixes: 6dc83230b43b ("net/bnxt: support port representor data path")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_reps.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index a15f993328..a7b75b543e 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -32,6 +32,14 @@ static const struct eth_dev_ops bnxt_rep_dev_ops = {
.flow_ops_get = bnxt_flow_ops_get_op
};
+static bool bnxt_rep_check_parent(struct bnxt_representor *rep)
+{
+ if (!rep->parent_dev->data->dev_private)
+ return false;
+
+ return true;
+}
+
uint16_t
bnxt_vfr_recv(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *mbuf)
{
@@ -266,12 +274,12 @@ int bnxt_representor_uninit(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(DEBUG, "BNXT Port:%d VFR uninit\n", eth_dev->data->port_id);
eth_dev->data->mac_addrs = NULL;
- parent_bp = rep->parent_dev->data->dev_private;
- if (!parent_bp) {
+ if (!bnxt_rep_check_parent(rep)) {
PMD_DRV_LOG(DEBUG, "BNXT Port:%d already freed\n",
eth_dev->data->port_id);
return 0;
}
+ parent_bp = rep->parent_dev->data->dev_private;
parent_bp->num_reps--;
vf_id = rep->vf_id;
@@ -539,11 +547,12 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
int rc = 0;
/* MAC Specifics */
- parent_bp = rep_bp->parent_dev->data->dev_private;
- if (!parent_bp) {
- PMD_DRV_LOG(ERR, "Rep parent NULL!\n");
+ if (!bnxt_rep_check_parent(rep_bp)) {
+ /* Need not be an error scenario, if parent is closed first */
+ PMD_DRV_LOG(INFO, "Rep parent port does not exist.\n");
return rc;
}
+ parent_bp = rep_bp->parent_dev->data->dev_private;
PMD_DRV_LOG(DEBUG, "Representor dev_info_get_op\n");
dev_info->max_mac_addrs = parent_bp->max_l2_ctx;
dev_info->max_hash_mac_addrs = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.800972282 +0800
+++ 0021-net-bnxt-fix-null-pointer-dereference.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 68eeafdef4db7362ff5307995b670a98f65f2493 Mon Sep 17 00:00:00 2001
+From a71de447a2351b7c5b4f95446b5382e253415232 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 68eeafdef4db7362ff5307995b670a98f65f2493 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 3a4720bc3c..edcc27f556 100644
+index a15f993328..a7b75b543e 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ixgbevf: fix RSS init for x550 NICs' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (19 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/bnxt: fix null pointer dereference' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/iavf: remove error logs for VLAN offloading' " Xueming Li
` (102 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Edwin Brossette; +Cc: Vladimir Medvedkin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2aa5a757508a5861f9ddfc07cace84b7a51697f6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2aa5a757508a5861f9ddfc07cace84b7a51697f6 Mon Sep 17 00:00:00 2001
From: Edwin Brossette <edwin.brossette@6wind.com>
Date: Thu, 15 Feb 2024 14:31:45 +0100
Subject: [PATCH] net/ixgbevf: fix RSS init for x550 NICs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3a53577d5f390e8635a672b79616e54c59b330ab ]
Different Intel NICs with the ixgbe PMD do not handle RSS in the same
way when working with virtualization. While some NICs like Intel 82599ES
only have a single RSS table in the device and leave all RSS features to
be handled by the PF, some other NICs like x550 let the VF handle RSS
features. This can lead to different behavior when RSS is enabled
depending on the model of nic used.
In particular, ixgbevf_dev_rx_init() does not configure RSS parameters
at device init, even if the multi-queue mode option is set in the device
configuration (i.e. RTE_ETH_MQ_RX_RSS is set). Note that this issue went
unnoticed until now, probably because some NICs do not really have
support for RSS in virtualization mode.
Thus, depending on the NIC used, we can find ourselves in a situation
where RSS is not configured despite being enabled. This will cause
serious performance issues because the RSS RETA table will be fully
zeroed, causing all packets to go only to the first queue, leaving all
other queues empty.
By looking at ixgbe_reta_size_get(), we can see that only X550 NIC
models have a non-zero RETA size set in VF mode. Therefore, add a call
to ixgbe_rss_configure() for these cards in ixgbevf_dev_rx_init() if the
option to enable RSS is set.
Fixes: f4d1598ee14f ("ixgbevf: support RSS config on x550")
Signed-off-by: Edwin Brossette <edwin.brossette@6wind.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ixgbe/ixgbe_rxtx.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 90b0a7004f..f6c17d4efb 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5844,6 +5844,25 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_PSRTYPE_RQPL_SHIFT;
IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
+ /* Initialize the rss for x550_vf cards if enabled */
+ switch (hw->mac.type) {
+ case ixgbe_mac_X550_vf:
+ case ixgbe_mac_X550EM_x_vf:
+ case ixgbe_mac_X550EM_a_vf:
+ switch (dev->data->dev_conf.rxmode.mq_mode) {
+ case RTE_ETH_MQ_RX_RSS:
+ case RTE_ETH_MQ_RX_DCB_RSS:
+ case RTE_ETH_MQ_RX_VMDQ_RSS:
+ ixgbe_rss_configure(dev);
+ break;
+ default:
+ break;
+ }
+ break;
+ default:
+ break;
+ }
+
ixgbe_set_rx_function(dev);
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.824293352 +0800
+++ 0022-net-ixgbevf-fix-RSS-init-for-x550-NICs.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 3a53577d5f390e8635a672b79616e54c59b330ab Mon Sep 17 00:00:00 2001
+From 2aa5a757508a5861f9ddfc07cace84b7a51697f6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3a53577d5f390e8635a672b79616e54c59b330ab ]
@@ -31 +33,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/iavf: remove error logs for VLAN offloading' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (20 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/ixgbevf: fix RSS init for x550 NICs' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/ixgbe: increase VF reset timeout' " Xueming Li
` (101 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: David Marchand; +Cc: Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=eefc0111dec8c9b7a03b672e1d890d3a9229ced3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From eefc0111dec8c9b7a03b672e1d890d3a9229ced3 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Tue, 6 Feb 2024 11:34:20 +0100
Subject: [PATCH] net/iavf: remove error logs for VLAN offloading
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 325764b3a20a16a7a997a324cc0b93367eb7f3e1 ]
This was reported by RH QE.
When a vlan is enforced on a VF via an administrative configuration on
the PF side, the net/iavf driver logs two error messages.
Those error messages have no consequence on the rest of the port
initialisation and packet processing works fine.
[root@toto ~] # ip l set enp94s0 vf 0 vlan 2
[root@toto ~] # dpdk-testpmd -a 0000:5e:02.0 -- -i
...
Configuring Port 0 (socket 0)
iavf_dev_init_vlan(): Failed to update vlan offload
iavf_dev_configure(): configure VLAN failed: -95
iavf_set_rx_function(): request RXDID[1] in Queue[0] is legacy, set
rx_pkt_burst as legacy for all queues
The first change is to remove the error log in iavf_dev_init_vlan().
This log is unneeded since all error paths are covered by dedicated log
messages already.
Then, in iavf_dev_init_vlan(), requesting all possible VLAN offloading
must not trigger an ERROR level log message. This is simply confusing,
as the application may not have requested such VLAN offloading.
The reason why the driver requests all offloading is unclear, so keep it
as is. Instead, rephrase the log message and lower its level to INFO.
Fixes: 1c301e8c3cff ("net/iavf: support new VLAN capabilities")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_ethdev.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 32a1626420..c26612c4a1 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -630,7 +630,8 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
RTE_ETH_VLAN_FILTER_MASK |
RTE_ETH_VLAN_EXTEND_MASK);
if (err) {
- PMD_DRV_LOG(ERR, "Failed to update vlan offload");
+ PMD_DRV_LOG(INFO,
+ "VLAN offloading is not supported, or offloading was refused by the PF");
return err;
}
@@ -706,9 +707,7 @@ iavf_dev_configure(struct rte_eth_dev *dev)
vf->max_rss_qregion = IAVF_MAX_NUM_QUEUES_DFLT;
}
- ret = iavf_dev_init_vlan(dev);
- if (ret)
- PMD_DRV_LOG(ERR, "configure VLAN failed: %d", ret);
+ iavf_dev_init_vlan(dev);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
if (iavf_init_rss(ad) != 0) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.849317819 +0800
+++ 0023-net-iavf-remove-error-logs-for-VLAN-offloading.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 325764b3a20a16a7a997a324cc0b93367eb7f3e1 Mon Sep 17 00:00:00 2001
+From eefc0111dec8c9b7a03b672e1d890d3a9229ced3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 325764b3a20a16a7a997a324cc0b93367eb7f3e1 ]
@@ -33 +35,0 @@
-Cc: stable@dpdk.org
@@ -42 +44 @@
-index b5f6049a91..2cb602a358 100644
+index 32a1626420..c26612c4a1 100644
@@ -45 +47 @@
-@@ -633,7 +633,8 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
+@@ -630,7 +630,8 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
@@ -55 +57 @@
-@@ -709,9 +710,7 @@ iavf_dev_configure(struct rte_eth_dev *dev)
+@@ -706,9 +707,7 @@ iavf_dev_configure(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ixgbe: increase VF reset timeout' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (21 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/iavf: remove error logs for VLAN offloading' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/i40e: remove incorrect 16B descriptor read block' " Xueming Li
` (100 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Kevin Traynor; +Cc: Vladimir Medvedkin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=33b5bed057b1c466dbfbd18f2b30deadee66715f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 33b5bed057b1c466dbfbd18f2b30deadee66715f Mon Sep 17 00:00:00 2001
From: Kevin Traynor <ktraynor@redhat.com>
Date: Tue, 30 Jan 2024 10:00:27 +0000
Subject: [PATCH] net/ixgbe: increase VF reset timeout
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 64e714f838aeb1afbd4e7544686a0d7cd8921589 ]
When VF issues a reset to PF there is a 50 msec wait plus an additional
max of 1 msec (200 * 5us) for the PF to indicate the reset is complete
before timeout.
In some cases, it is seen that the reset is timing out, in which case
the reset does not complete and an error is returned.
In order to account for this, continue to wait an initial 50 msecs, but
then allow a max of an additional 50 msecs (10,000 * 5us) for the
command to complete.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ixgbe/base/ixgbe_type.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ixgbe/base/ixgbe_type.h b/drivers/net/ixgbe/base/ixgbe_type.h
index 1094df5891..35212a561b 100644
--- a/drivers/net/ixgbe/base/ixgbe_type.h
+++ b/drivers/net/ixgbe/base/ixgbe_type.h
@@ -1800,7 +1800,7 @@ enum {
/* VFRE bitmask */
#define IXGBE_VFRE_ENABLE_ALL 0xFFFFFFFF
-#define IXGBE_VF_INIT_TIMEOUT 200 /* Number of retries to clear RSTI */
+#define IXGBE_VF_INIT_TIMEOUT 10000 /* Number of retries to clear RSTI */
/* RDHMPN and TDHMPN bitmasks */
#define IXGBE_RDHMPN_RDICADDR 0x007FF800
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.876115184 +0800
+++ 0024-net-ixgbe-increase-VF-reset-timeout.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 64e714f838aeb1afbd4e7544686a0d7cd8921589 Mon Sep 17 00:00:00 2001
+From 33b5bed057b1c466dbfbd18f2b30deadee66715f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 64e714f838aeb1afbd4e7544686a0d7cd8921589 ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/i40e: remove incorrect 16B descriptor read block' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (22 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/ixgbe: increase VF reset timeout' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/iavf: " Xueming Li
` (99 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Anatoly Burakov, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=542c8410cbefeada90e6ba1028da1e92d279aba3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 542c8410cbefeada90e6ba1028da1e92d279aba3 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 23 Jan 2024 11:40:48 +0000
Subject: [PATCH] net/i40e: remove incorrect 16B descriptor read block
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b527d9585d9cd0446d6bfa39d3a8e896c87883e5 ]
By default, the driver works with 32B descriptors, but has a separate
descriptor read block for reading two descriptors at a time when using
16B descriptors. However, the 32B reads used are not guaranteed to be
atomic, which will cause issues if that is not the case on a system,
since the descriptors may be read in an undefined order. Remove the
block, to avoid issues, and just use the regular descriptor reading path
for 16B descriptors, if that support is enabled at build time.
Fixes: dafadd73762e ("net/i40e: add AVX2 Rx function")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/i40e/i40e_rxtx_vec_avx2.c | 64 ++++++++++-----------------
1 file changed, 24 insertions(+), 40 deletions(-)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx2.c b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
index f468c1fd90..19cf0ac718 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx2.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx2.c
@@ -276,46 +276,30 @@ _recv_raw_pkts_vec_avx2(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
_mm256_loadu_si256((void *)&sw_ring[i + 4]));
#endif
- __m256i raw_desc0_1, raw_desc2_3, raw_desc4_5, raw_desc6_7;
-#ifdef RTE_LIBRTE_I40E_16BYTE_RX_DESC
- /* for AVX we need alignment otherwise loads are not atomic */
- if (avx_aligned) {
- /* load in descriptors, 2 at a time, in reverse order */
- raw_desc6_7 = _mm256_load_si256((void *)(rxdp + 6));
- rte_compiler_barrier();
- raw_desc4_5 = _mm256_load_si256((void *)(rxdp + 4));
- rte_compiler_barrier();
- raw_desc2_3 = _mm256_load_si256((void *)(rxdp + 2));
- rte_compiler_barrier();
- raw_desc0_1 = _mm256_load_si256((void *)(rxdp + 0));
- } else
-#endif
- do {
- const __m128i raw_desc7 = _mm_load_si128((void *)(rxdp + 7));
- rte_compiler_barrier();
- const __m128i raw_desc6 = _mm_load_si128((void *)(rxdp + 6));
- rte_compiler_barrier();
- const __m128i raw_desc5 = _mm_load_si128((void *)(rxdp + 5));
- rte_compiler_barrier();
- const __m128i raw_desc4 = _mm_load_si128((void *)(rxdp + 4));
- rte_compiler_barrier();
- const __m128i raw_desc3 = _mm_load_si128((void *)(rxdp + 3));
- rte_compiler_barrier();
- const __m128i raw_desc2 = _mm_load_si128((void *)(rxdp + 2));
- rte_compiler_barrier();
- const __m128i raw_desc1 = _mm_load_si128((void *)(rxdp + 1));
- rte_compiler_barrier();
- const __m128i raw_desc0 = _mm_load_si128((void *)(rxdp + 0));
-
- raw_desc6_7 = _mm256_inserti128_si256(
- _mm256_castsi128_si256(raw_desc6), raw_desc7, 1);
- raw_desc4_5 = _mm256_inserti128_si256(
- _mm256_castsi128_si256(raw_desc4), raw_desc5, 1);
- raw_desc2_3 = _mm256_inserti128_si256(
- _mm256_castsi128_si256(raw_desc2), raw_desc3, 1);
- raw_desc0_1 = _mm256_inserti128_si256(
- _mm256_castsi128_si256(raw_desc0), raw_desc1, 1);
- } while (0);
+ const __m128i raw_desc7 = _mm_load_si128((void *)(rxdp + 7));
+ rte_compiler_barrier();
+ const __m128i raw_desc6 = _mm_load_si128((void *)(rxdp + 6));
+ rte_compiler_barrier();
+ const __m128i raw_desc5 = _mm_load_si128((void *)(rxdp + 5));
+ rte_compiler_barrier();
+ const __m128i raw_desc4 = _mm_load_si128((void *)(rxdp + 4));
+ rte_compiler_barrier();
+ const __m128i raw_desc3 = _mm_load_si128((void *)(rxdp + 3));
+ rte_compiler_barrier();
+ const __m128i raw_desc2 = _mm_load_si128((void *)(rxdp + 2));
+ rte_compiler_barrier();
+ const __m128i raw_desc1 = _mm_load_si128((void *)(rxdp + 1));
+ rte_compiler_barrier();
+ const __m128i raw_desc0 = _mm_load_si128((void *)(rxdp + 0));
+
+ const __m256i raw_desc6_7 = _mm256_inserti128_si256(
+ _mm256_castsi128_si256(raw_desc6), raw_desc7, 1);
+ const __m256i raw_desc4_5 = _mm256_inserti128_si256(
+ _mm256_castsi128_si256(raw_desc4), raw_desc5, 1);
+ const __m256i raw_desc2_3 = _mm256_inserti128_si256(
+ _mm256_castsi128_si256(raw_desc2), raw_desc3, 1);
+ const __m256i raw_desc0_1 = _mm256_inserti128_si256(
+ _mm256_castsi128_si256(raw_desc0), raw_desc1, 1);
if (split_packet) {
int j;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.900966252 +0800
+++ 0025-net-i40e-remove-incorrect-16B-descriptor-read-block.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From b527d9585d9cd0446d6bfa39d3a8e896c87883e5 Mon Sep 17 00:00:00 2001
+From 542c8410cbefeada90e6ba1028da1e92d279aba3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b527d9585d9cd0446d6bfa39d3a8e896c87883e5 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/iavf: remove incorrect 16B descriptor read block' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (23 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/i40e: remove incorrect 16B descriptor read block' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/ice: " Xueming Li
` (98 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Anatoly Burakov, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=72093d3d41b3a9fcad9010accc7f55e79f205cc9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 72093d3d41b3a9fcad9010accc7f55e79f205cc9 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 23 Jan 2024 11:40:50 +0000
Subject: [PATCH] net/iavf: remove incorrect 16B descriptor read block
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d4ade5d02d188fcbe51871c5a5d66ef075ca0f86 ]
By default, the driver works with 32B descriptors, but has a separate
descriptor read block for reading two descriptors at a time when using
16B descriptors. However, the 32B reads used are not guaranteed to be
atomic, which will cause issues if that is not the case on a system,
since the descriptors may be read in an undefined order. Remove the
block, to avoid issues, and just use the regular descriptor reading path
for 16B descriptors, if that support is enabled at build time.
Fixes: af0c246a3800 ("net/iavf: enable AVX2 for iavf")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/iavf/iavf_rxtx_vec_avx2.c | 80 ++++++++-------------------
1 file changed, 24 insertions(+), 56 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 510b4d8f1c..49d41af953 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -193,62 +193,30 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
_mm256_loadu_si256((void *)&sw_ring[i + 4]));
#endif
- __m256i raw_desc0_1, raw_desc2_3, raw_desc4_5, raw_desc6_7;
-#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- /* for AVX we need alignment otherwise loads are not atomic */
- if (avx_aligned) {
- /* load in descriptors, 2 at a time, in reverse order */
- raw_desc6_7 = _mm256_load_si256((void *)(rxdp + 6));
- rte_compiler_barrier();
- raw_desc4_5 = _mm256_load_si256((void *)(rxdp + 4));
- rte_compiler_barrier();
- raw_desc2_3 = _mm256_load_si256((void *)(rxdp + 2));
- rte_compiler_barrier();
- raw_desc0_1 = _mm256_load_si256((void *)(rxdp + 0));
- } else
-#endif
- {
- const __m128i raw_desc7 =
- _mm_load_si128((void *)(rxdp + 7));
- rte_compiler_barrier();
- const __m128i raw_desc6 =
- _mm_load_si128((void *)(rxdp + 6));
- rte_compiler_barrier();
- const __m128i raw_desc5 =
- _mm_load_si128((void *)(rxdp + 5));
- rte_compiler_barrier();
- const __m128i raw_desc4 =
- _mm_load_si128((void *)(rxdp + 4));
- rte_compiler_barrier();
- const __m128i raw_desc3 =
- _mm_load_si128((void *)(rxdp + 3));
- rte_compiler_barrier();
- const __m128i raw_desc2 =
- _mm_load_si128((void *)(rxdp + 2));
- rte_compiler_barrier();
- const __m128i raw_desc1 =
- _mm_load_si128((void *)(rxdp + 1));
- rte_compiler_barrier();
- const __m128i raw_desc0 =
- _mm_load_si128((void *)(rxdp + 0));
-
- raw_desc6_7 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc6),
- raw_desc7, 1);
- raw_desc4_5 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc4),
- raw_desc5, 1);
- raw_desc2_3 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc2),
- raw_desc3, 1);
- raw_desc0_1 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc0),
- raw_desc1, 1);
- }
+ const __m128i raw_desc7 = _mm_load_si128((void *)(rxdp + 7));
+ rte_compiler_barrier();
+ const __m128i raw_desc6 = _mm_load_si128((void *)(rxdp + 6));
+ rte_compiler_barrier();
+ const __m128i raw_desc5 = _mm_load_si128((void *)(rxdp + 5));
+ rte_compiler_barrier();
+ const __m128i raw_desc4 = _mm_load_si128((void *)(rxdp + 4));
+ rte_compiler_barrier();
+ const __m128i raw_desc3 = _mm_load_si128((void *)(rxdp + 3));
+ rte_compiler_barrier();
+ const __m128i raw_desc2 = _mm_load_si128((void *)(rxdp + 2));
+ rte_compiler_barrier();
+ const __m128i raw_desc1 = _mm_load_si128((void *)(rxdp + 1));
+ rte_compiler_barrier();
+ const __m128i raw_desc0 = _mm_load_si128((void *)(rxdp + 0));
+
+ const __m256i raw_desc6_7 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc6), raw_desc7, 1);
+ const __m256i raw_desc4_5 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc4), raw_desc5, 1);
+ const __m256i raw_desc2_3 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc2), raw_desc3, 1);
+ const __m256i raw_desc0_1 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc0), raw_desc1, 1);
if (split_packet) {
int j;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.929289815 +0800
+++ 0026-net-iavf-remove-incorrect-16B-descriptor-read-block.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From d4ade5d02d188fcbe51871c5a5d66ef075ca0f86 Mon Sep 17 00:00:00 2001
+From 72093d3d41b3a9fcad9010accc7f55e79f205cc9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d4ade5d02d188fcbe51871c5a5d66ef075ca0f86 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ice: remove incorrect 16B descriptor read block' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (24 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/iavf: " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix inline device pointer check' " Xueming Li
` (97 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Anatoly Burakov, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=dbdcd8bb8532fb04458d131adc335cfbda7c70da
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From dbdcd8bb8532fb04458d131adc335cfbda7c70da Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 23 Jan 2024 11:40:52 +0000
Subject: [PATCH] net/ice: remove incorrect 16B descriptor read block
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9aee908eddeb6e8f3de402ac5661bca5161809a6 ]
By default, the driver works with 32B descriptors, but has a separate
descriptor read block for reading two descriptors at a time when using
16B descriptors. However, the 32B reads used are not guaranteed to be
atomic, which will cause issues if that is not the case on a system,
since the descriptors may be read in an undefined order. Remove the
block, to avoid issues, and just use the regular descriptor reading path
for 16B descriptors, if that support is enabled at build time.
Fixes: ae60d3c9b227 ("net/ice: support Rx AVX2 vector")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/net/ice/ice_rxtx_vec_avx2.c | 80 +++++++++--------------------
1 file changed, 24 insertions(+), 56 deletions(-)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 6f6d790967..d6e88dbb29 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -254,62 +254,30 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
_mm256_loadu_si256((void *)&sw_ring[i + 4]));
#endif
- __m256i raw_desc0_1, raw_desc2_3, raw_desc4_5, raw_desc6_7;
-#ifdef RTE_LIBRTE_ICE_16BYTE_RX_DESC
- /* for AVX we need alignment otherwise loads are not atomic */
- if (avx_aligned) {
- /* load in descriptors, 2 at a time, in reverse order */
- raw_desc6_7 = _mm256_load_si256((void *)(rxdp + 6));
- rte_compiler_barrier();
- raw_desc4_5 = _mm256_load_si256((void *)(rxdp + 4));
- rte_compiler_barrier();
- raw_desc2_3 = _mm256_load_si256((void *)(rxdp + 2));
- rte_compiler_barrier();
- raw_desc0_1 = _mm256_load_si256((void *)(rxdp + 0));
- } else
-#endif
- {
- const __m128i raw_desc7 =
- _mm_load_si128((void *)(rxdp + 7));
- rte_compiler_barrier();
- const __m128i raw_desc6 =
- _mm_load_si128((void *)(rxdp + 6));
- rte_compiler_barrier();
- const __m128i raw_desc5 =
- _mm_load_si128((void *)(rxdp + 5));
- rte_compiler_barrier();
- const __m128i raw_desc4 =
- _mm_load_si128((void *)(rxdp + 4));
- rte_compiler_barrier();
- const __m128i raw_desc3 =
- _mm_load_si128((void *)(rxdp + 3));
- rte_compiler_barrier();
- const __m128i raw_desc2 =
- _mm_load_si128((void *)(rxdp + 2));
- rte_compiler_barrier();
- const __m128i raw_desc1 =
- _mm_load_si128((void *)(rxdp + 1));
- rte_compiler_barrier();
- const __m128i raw_desc0 =
- _mm_load_si128((void *)(rxdp + 0));
-
- raw_desc6_7 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc6),
- raw_desc7, 1);
- raw_desc4_5 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc4),
- raw_desc5, 1);
- raw_desc2_3 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc2),
- raw_desc3, 1);
- raw_desc0_1 =
- _mm256_inserti128_si256
- (_mm256_castsi128_si256(raw_desc0),
- raw_desc1, 1);
- }
+ const __m128i raw_desc7 = _mm_load_si128((void *)(rxdp + 7));
+ rte_compiler_barrier();
+ const __m128i raw_desc6 = _mm_load_si128((void *)(rxdp + 6));
+ rte_compiler_barrier();
+ const __m128i raw_desc5 = _mm_load_si128((void *)(rxdp + 5));
+ rte_compiler_barrier();
+ const __m128i raw_desc4 = _mm_load_si128((void *)(rxdp + 4));
+ rte_compiler_barrier();
+ const __m128i raw_desc3 = _mm_load_si128((void *)(rxdp + 3));
+ rte_compiler_barrier();
+ const __m128i raw_desc2 = _mm_load_si128((void *)(rxdp + 2));
+ rte_compiler_barrier();
+ const __m128i raw_desc1 = _mm_load_si128((void *)(rxdp + 1));
+ rte_compiler_barrier();
+ const __m128i raw_desc0 = _mm_load_si128((void *)(rxdp + 0));
+
+ const __m256i raw_desc6_7 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc6), raw_desc7, 1);
+ const __m256i raw_desc4_5 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc4), raw_desc5, 1);
+ const __m256i raw_desc2_3 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc2), raw_desc3, 1);
+ const __m256i raw_desc0_1 =
+ _mm256_inserti128_si256(_mm256_castsi128_si256(raw_desc0), raw_desc1, 1);
if (split_packet) {
int j;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.962044072 +0800
+++ 0027-net-ice-remove-incorrect-16B-descriptor-read-block.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From 9aee908eddeb6e8f3de402ac5661bca5161809a6 Mon Sep 17 00:00:00 2001
+From dbdcd8bb8532fb04458d131adc335cfbda7c70da Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9aee908eddeb6e8f3de402ac5661bca5161809a6 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'common/cnxk: fix inline device pointer check' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (25 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/ice: " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: fix Rx packet format check condition' " Xueming Li
` (96 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rahul Bhansali; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8bc81d544729171ea2faed5d394b6f9fed0979a5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8bc81d544729171ea2faed5d394b6f9fed0979a5 Mon Sep 17 00:00:00 2001
From: Rahul Bhansali <rbhansali@marvell.com>
Date: Thu, 22 Feb 2024 15:37:27 +0530
Subject: [PATCH] common/cnxk: fix inline device pointer check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c4e9c066ab8c03a27183b03d29d4fce7c4b462d5 ]
Add a missing check of the inline device pointer before accessing
the is_multi_channel variable.
Fixes: 7ea187184a51 ("common/cnxk: support 1-N pool-aura per NIX LF")
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 750fd08355..a7bae8a51c 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -876,7 +876,8 @@ roc_nix_inl_inb_init(struct roc_nix *roc_nix)
inl_dev = idev->nix_inl_dev;
roc_nix->custom_meta_aura_ena = (roc_nix->local_meta_aura_ena &&
- (inl_dev->is_multi_channel || roc_nix->custom_sa_action));
+ ((inl_dev && inl_dev->is_multi_channel) ||
+ roc_nix->custom_sa_action));
if (!roc_model_is_cn9k() && !roc_errata_nix_no_meta_aura()) {
nix->need_meta_aura = true;
if (!roc_nix->local_meta_aura_ena || roc_nix->custom_meta_aura_ena)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:05.987137539 +0800
+++ 0028-common-cnxk-fix-inline-device-pointer-check.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From c4e9c066ab8c03a27183b03d29d4fce7c4b462d5 Mon Sep 17 00:00:00 2001
+From 8bc81d544729171ea2faed5d394b6f9fed0979a5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c4e9c066ab8c03a27183b03d29d4fce7c4b462d5 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index de8fd2a605..a205c658e9 100644
+index 750fd08355..a7bae8a51c 100644
@@ -21 +23 @@
-@@ -933,7 +933,8 @@ roc_nix_inl_inb_init(struct roc_nix *roc_nix)
+@@ -876,7 +876,8 @@ roc_nix_inl_inb_init(struct roc_nix *roc_nix)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/cnxk: fix Rx packet format check condition' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (26 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix inline device pointer check' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/bnx2x: fix warnings about memcpy lengths' " Xueming Li
` (95 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rahul Bhansali; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2d11f389b0f8fe86387c2d7ddc33e9230dfedd11
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2d11f389b0f8fe86387c2d7ddc33e9230dfedd11 Mon Sep 17 00:00:00 2001
From: Rahul Bhansali <rbhansali@marvell.com>
Date: Thu, 22 Feb 2024 15:37:28 +0530
Subject: [PATCH] net/cnxk: fix Rx packet format check condition
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f9e6c013564967de5e76f45a81dc593361f2ccdd ]
For IPsec decrypted packets, the full packet format condition check
is enabled for both the reassembly and non-reassembly paths as part
of OOP handling, while it should apply only to the reassembly path.
To fix this, a NIX_RX_REAS_F flag condition is added to skip the
packet format check in the non-reassembly fast path.
Fixes: 5e9e008d0127 ("net/cnxk: support inline ingress out-of-place session")
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cn10k_rx.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 7bb4c86d75..86e4233dc7 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -705,7 +705,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
if (cq_w1 & BIT(11) && flags & NIX_RX_OFFLOAD_SECURITY_F) {
const uint64_t *wqe = (const uint64_t *)(mbuf + 1);
- if (hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL)
+ if (!(flags & NIX_RX_REAS_F) || hdr->w0.pkt_fmt != ROC_IE_OT_SA_PKT_FMT_FULL)
rx = (const union nix_rx_parse_u *)(wqe + 1);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.011927707 +0800
+++ 0029-net-cnxk-fix-Rx-packet-format-check-condition.patch 2024-04-13 20:43:04.937754010 +0800
@@ -1 +1 @@
-From f9e6c013564967de5e76f45a81dc593361f2ccdd Mon Sep 17 00:00:00 2001
+From 2d11f389b0f8fe86387c2d7ddc33e9230dfedd11 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f9e6c013564967de5e76f45a81dc593361f2ccdd ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index c4ad1b64fe..89621af3fb 100644
+index 7bb4c86d75..86e4233dc7 100644
@@ -24,3 +26,3 @@
-@@ -734,7 +734,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
- else
- wqe = (const uint64_t *)(mbuf + 1);
+@@ -705,7 +705,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
+ if (cq_w1 & BIT(11) && flags & NIX_RX_OFFLOAD_SECURITY_F) {
+ const uint64_t *wqe = (const uint64_t *)(mbuf + 1);
^ permalink raw reply [flat|nested] 263+ messages in thread
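The one-line change above hinges on short-circuit evaluation: when the reassembly flag is not compiled into the fast path, the packet format field is never consulted at all. A hedged sketch of the predicate, with made-up constant values standing in for the real cnxk/ROC definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the cnxk flag and packet format values. */
#define NIX_RX_REAS_F (1u << 0)
#define PKT_FMT_FULL  1u

/* Mirrors the fixed condition: take the inner parse area after the WQE
 * unless the reassembly path is enabled AND the packet is full format. */
static bool use_inner_parse(uint32_t flags, uint32_t pkt_fmt)
{
	return !(flags & NIX_RX_REAS_F) || pkt_fmt != PKT_FMT_FULL;
}
```

Because `!(flags & NIX_RX_REAS_F)` is a compile-time constant in templated fast-path builds, the format comparison is optimized away entirely in non-reassembly builds, which is the point of the fix.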
* patch 'net/bnx2x: fix warnings about memcpy lengths' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (27 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: fix Rx packet format check condition' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: remove CN9K inline IPsec FP opcodes' " Xueming Li
` (94 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Morten Brørup; +Cc: Devendra Singh Rawat, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b3ef799286fdab7c12d8b0a7f77a61aa45500f06
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b3ef799286fdab7c12d8b0a7f77a61aa45500f06 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>
Date: Fri, 23 Feb 2024 15:00:56 +0100
Subject: [PATCH] net/bnx2x: fix warnings about memcpy lengths
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c50b86f7d60f757ea62fe14076be69bf114f1740 ]
The vlan in the bulletin does not contain a VLAN header, only the
VLAN ID, so only copy 2 bytes, not 4. The target structure has padding
after the field, so copying 2 bytes too many is effectively harmless.
Fix it by using the generic memcpy version instead of the specialized
rte version, as it is not used in the fast path.
Also, use RTE_PTR_ADD where copying arrays to the offset of a first field
in a structure holding multiple fields, to avoid compiler warnings with
decorated memcpy.
Bugzilla ID: 1146
Fixes: 540a211084a7 ("bnx2x: driver core")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
---
drivers/net/bnx2x/bnx2x_stats.c | 14 ++++++++------
drivers/net/bnx2x/bnx2x_vfpf.c | 14 +++++++-------
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c
index c07b01510a..69132c7c80 100644
--- a/drivers/net/bnx2x/bnx2x_stats.c
+++ b/drivers/net/bnx2x/bnx2x_stats.c
@@ -114,7 +114,7 @@ bnx2x_hw_stats_post(struct bnx2x_softc *sc)
/* Update MCP's statistics if possible */
if (sc->func_stx) {
- rte_memcpy(BNX2X_SP(sc, func_stats), &sc->func_stats,
+ memcpy(BNX2X_SP(sc, func_stats), &sc->func_stats,
sizeof(sc->func_stats));
}
@@ -817,10 +817,10 @@ bnx2x_hw_stats_update(struct bnx2x_softc *sc)
etherstatspktsover1522octets);
}
- rte_memcpy(old, new, sizeof(struct nig_stats));
+ memcpy(old, new, sizeof(struct nig_stats));
- rte_memcpy(&(estats->rx_stat_ifhcinbadoctets_hi), &(pstats->mac_stx[1]),
- sizeof(struct mac_stx));
+ memcpy(RTE_PTR_ADD(estats, offsetof(struct bnx2x_eth_stats, rx_stat_ifhcinbadoctets_hi)),
+ &pstats->mac_stx[1], sizeof(struct mac_stx));
estats->brb_drop_hi = pstats->brb_drop_hi;
estats->brb_drop_lo = pstats->brb_drop_lo;
@@ -1492,9 +1492,11 @@ bnx2x_stats_init(struct bnx2x_softc *sc)
REG_RD(sc, NIG_REG_STAT0_BRB_TRUNCATE + port*0x38);
if (!CHIP_IS_E3(sc)) {
REG_RD_DMAE(sc, NIG_REG_STAT0_EGRESS_MAC_PKT0 + port*0x50,
- &(sc->port.old_nig_stats.egress_mac_pkt0_lo), 2);
+ RTE_PTR_ADD(&sc->port.old_nig_stats,
+ offsetof(struct nig_stats, egress_mac_pkt0_lo)), 2);
REG_RD_DMAE(sc, NIG_REG_STAT0_EGRESS_MAC_PKT1 + port*0x50,
- &(sc->port.old_nig_stats.egress_mac_pkt1_lo), 2);
+ RTE_PTR_ADD(&sc->port.old_nig_stats,
+ offsetof(struct nig_stats, egress_mac_pkt1_lo)), 2);
}
/* function stats */
diff --git a/drivers/net/bnx2x/bnx2x_vfpf.c b/drivers/net/bnx2x/bnx2x_vfpf.c
index 63953c2979..5411df3a38 100644
--- a/drivers/net/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/bnx2x/bnx2x_vfpf.c
@@ -52,9 +52,9 @@ bnx2x_check_bull(struct bnx2x_softc *sc)
/* check the mac address and VLAN and allocate memory if valid */
if (valid_bitmap & (1 << MAC_ADDR_VALID) && memcmp(bull->mac, sc->old_bulletin.mac, ETH_ALEN))
- rte_memcpy(&sc->link_params.mac_addr, bull->mac, ETH_ALEN);
+ memcpy(&sc->link_params.mac_addr, bull->mac, ETH_ALEN);
if (valid_bitmap & (1 << VLAN_VALID))
- rte_memcpy(&bull->vlan, &sc->old_bulletin.vlan, RTE_VLAN_HLEN);
+ memcpy(&bull->vlan, &sc->old_bulletin.vlan, sizeof(bull->vlan));
sc->old_bulletin = *bull;
@@ -569,7 +569,7 @@ bnx2x_vf_set_mac(struct bnx2x_softc *sc, int set)
bnx2x_check_bull(sc);
- rte_memcpy(query->filters[0].mac, sc->link_params.mac_addr, ETH_ALEN);
+ memcpy(query->filters[0].mac, sc->link_params.mac_addr, ETH_ALEN);
bnx2x_add_tlv(sc, query, query->first_tlv.tl.length,
BNX2X_VF_TLV_LIST_END,
@@ -583,9 +583,9 @@ bnx2x_vf_set_mac(struct bnx2x_softc *sc, int set)
while (BNX2X_VF_STATUS_FAILURE == reply->status &&
bnx2x_check_bull(sc)) {
/* A new mac was configured by PF for us */
- rte_memcpy(sc->link_params.mac_addr, sc->pf2vf_bulletin->mac,
+ memcpy(sc->link_params.mac_addr, sc->pf2vf_bulletin->mac,
ETH_ALEN);
- rte_memcpy(query->filters[0].mac, sc->pf2vf_bulletin->mac,
+ memcpy(query->filters[0].mac, sc->pf2vf_bulletin->mac,
ETH_ALEN);
rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
@@ -622,10 +622,10 @@ bnx2x_vf_config_rss(struct bnx2x_softc *sc,
BNX2X_VF_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
- rte_memcpy(query->rss_key, params->rss_key, sizeof(params->rss_key));
+ memcpy(query->rss_key, params->rss_key, sizeof(params->rss_key));
query->rss_key_size = T_ETH_RSS_KEY;
- rte_memcpy(query->ind_table, params->ind_table, T_ETH_INDIRECTION_TABLE_SIZE);
+ memcpy(query->ind_table, params->ind_table, T_ETH_INDIRECTION_TABLE_SIZE);
query->ind_table_size = T_ETH_INDIRECTION_TABLE_SIZE;
query->rss_result_mask = params->rss_result_mask;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.035532376 +0800
+++ 0030-net-bnx2x-fix-warnings-about-memcpy-lengths.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From c50b86f7d60f757ea62fe14076be69bf114f1740 Mon Sep 17 00:00:00 2001
+From b3ef799286fdab7c12d8b0a7f77a61aa45500f06 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c50b86f7d60f757ea62fe14076be69bf114f1740 ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
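The RTE_PTR_ADD/offsetof construct used in the patch is a general way to memcpy into a run of consecutive struct fields without the compiler warning that the copy overflows the first field. A minimal sketch using plain pointer arithmetic (the struct below is illustrative, not the bnx2x nig_stats layout):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Illustrative struct: two adjacent counters filled by one copy. */
struct stats {
	uint32_t other;
	uint32_t pkt0_lo;	/* first field of the copied run */
	uint32_t pkt0_hi;	/* second field, filled by the same copy */
};

/* Copy n 32-bit words starting at the pkt0_lo field. Computing the
 * destination as base + offsetof(...) yields a plain byte pointer, so
 * a fortified memcpy no longer flags a copy larger than
 * sizeof(pkt0_lo), while the bytes land in the same place as
 * &s->pkt0_lo would. */
static void fill_pkt0(struct stats *s, const uint32_t *src, size_t n)
{
	memcpy((char *)s + offsetof(struct stats, pkt0_lo),
	       src, n * sizeof(uint32_t));
}
```

This relies on the copied fields being contiguous (no padding between same-sized members), which holds here and in the driver structures the patch touches.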
* patch 'common/cnxk: remove CN9K inline IPsec FP opcodes' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (28 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/bnx2x: fix warnings about memcpy lengths' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: fix buffer size configuration' " Xueming Li
` (93 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fbfaa5ae043e9902427af18b0f4802235f6a60a7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fbfaa5ae043e9902427af18b0f4802235f6a60a7 Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Mon, 26 Feb 2024 19:05:23 +0530
Subject: [PATCH] common/cnxk: remove CN9K inline IPsec FP opcodes
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 930d94170e044ce1a2a2f222306c7dad50898728 ]
Since inline IPsec in cn9k now uses the same opcode as LA,
remove the definitions of the fast path opcodes.
Also fix devarg handling for ipsec_out_max_sa to allow 32-bit values.
Fixes: fe5846bcc076 ("net/cnxk: add devargs for min-max SPI")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/cnxk_security.c | 229 -------------------------
drivers/common/cnxk/cnxk_security.h | 12 --
drivers/common/cnxk/roc_ie_on.h | 60 -------
drivers/common/cnxk/roc_nix_inl.h | 50 +-----
drivers/common/cnxk/version.map | 4 -
drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
6 files changed, 3 insertions(+), 354 deletions(-)
diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index a8c3ba90cd..40685d0912 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -618,235 +618,6 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
return !!sa->w2.s.valid;
}
-static inline int
-ipsec_xfrm_verify(struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm)
-{
- if (crypto_xfrm->next == NULL)
- return -EINVAL;
-
- if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- } else {
- if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int
-onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
- uint8_t *cipher_key, uint8_t *hmac_opad_ipad,
- struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm)
-{
- struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
- int rc, length, auth_key_len;
- const uint8_t *key = NULL;
- uint8_t ccm_flag = 0;
-
- /* Set direction */
- switch (ipsec_xfrm->direction) {
- case RTE_SECURITY_IPSEC_SA_DIR_INGRESS:
- ctl->direction = ROC_IE_SA_DIR_INBOUND;
- auth_xfrm = crypto_xfrm;
- cipher_xfrm = crypto_xfrm->next;
- break;
- case RTE_SECURITY_IPSEC_SA_DIR_EGRESS:
- ctl->direction = ROC_IE_SA_DIR_OUTBOUND;
- cipher_xfrm = crypto_xfrm;
- auth_xfrm = crypto_xfrm->next;
- break;
- default:
- return -EINVAL;
- }
-
- /* Set protocol - ESP vs AH */
- switch (ipsec_xfrm->proto) {
- case RTE_SECURITY_IPSEC_SA_PROTO_ESP:
- ctl->ipsec_proto = ROC_IE_SA_PROTOCOL_ESP;
- break;
- case RTE_SECURITY_IPSEC_SA_PROTO_AH:
- return -ENOTSUP;
- default:
- return -EINVAL;
- }
-
- /* Set mode - transport vs tunnel */
- switch (ipsec_xfrm->mode) {
- case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
- ctl->ipsec_mode = ROC_IE_SA_MODE_TRANSPORT;
- break;
- case RTE_SECURITY_IPSEC_SA_MODE_TUNNEL:
- ctl->ipsec_mode = ROC_IE_SA_MODE_TUNNEL;
- break;
- default:
- return -EINVAL;
- }
-
- /* Set encryption algorithm */
- if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- length = crypto_xfrm->aead.key.length;
-
- switch (crypto_xfrm->aead.algo) {
- case RTE_CRYPTO_AEAD_AES_GCM:
- ctl->enc_type = ROC_IE_ON_SA_ENC_AES_GCM;
- ctl->auth_type = ROC_IE_ON_SA_AUTH_NULL;
- memcpy(salt, &ipsec_xfrm->salt, 4);
- key = crypto_xfrm->aead.key.data;
- break;
- case RTE_CRYPTO_AEAD_AES_CCM:
- ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CCM;
- ctl->auth_type = ROC_IE_ON_SA_AUTH_NULL;
- ccm_flag = 0x07 & ~ROC_CPT_AES_CCM_CTR_LEN;
- *salt = ccm_flag;
- memcpy(PLT_PTR_ADD(salt, 1), &ipsec_xfrm->salt, 3);
- key = crypto_xfrm->aead.key.data;
- break;
- default:
- return -ENOTSUP;
- }
-
- } else {
- rc = ipsec_xfrm_verify(ipsec_xfrm, crypto_xfrm);
- if (rc)
- return rc;
-
- switch (cipher_xfrm->cipher.algo) {
- case RTE_CRYPTO_CIPHER_AES_CBC:
- ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CBC;
- break;
- case RTE_CRYPTO_CIPHER_AES_CTR:
- ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CTR;
- break;
- default:
- return -ENOTSUP;
- }
-
- switch (auth_xfrm->auth.algo) {
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = ROC_IE_ON_SA_AUTH_SHA1;
- break;
- default:
- return -ENOTSUP;
- }
- auth_key_len = auth_xfrm->auth.key.length;
- if (auth_key_len < 20 || auth_key_len > 64)
- return -ENOTSUP;
-
- key = cipher_xfrm->cipher.key.data;
- length = cipher_xfrm->cipher.key.length;
-
- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
- }
-
- switch (length) {
- case ROC_CPT_AES128_KEY_LEN:
- ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_128;
- break;
- case ROC_CPT_AES192_KEY_LEN:
- ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_192;
- break;
- case ROC_CPT_AES256_KEY_LEN:
- ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- memcpy(cipher_key, key, length);
-
- if (ipsec_xfrm->options.esn)
- ctl->esn_en = 1;
-
- ctl->spi = rte_cpu_to_be_32(ipsec_xfrm->spi);
- return 0;
-}
-
-int
-cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
- struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm)
-{
- struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
- int rc;
-
- rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
- sa->hmac_key, ipsec_xfrm,
- crypto_xfrm);
- if (rc)
- return rc;
-
- rte_wmb();
-
- /* Enable SA */
- ctl->valid = 1;
- return 0;
-}
-
-int
-cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
- struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm)
-{
- struct rte_security_ipsec_tunnel_param *tunnel = &ipsec_xfrm->tunnel;
- struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
- int rc;
-
- /* Fill common params */
- rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
- sa->hmac_key, ipsec_xfrm,
- crypto_xfrm);
- if (rc)
- return rc;
-
- if (ipsec_xfrm->mode != RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- goto skip_tunnel_info;
-
- /* Tunnel header info */
- switch (tunnel->type) {
- case RTE_SECURITY_IPSEC_TUNNEL_IPV4:
- memcpy(&sa->ip_src, &tunnel->ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&sa->ip_dst, &tunnel->ipv4.dst_ip,
- sizeof(struct in_addr));
- break;
- case RTE_SECURITY_IPSEC_TUNNEL_IPV6:
- return -ENOTSUP;
- default:
- return -EINVAL;
- }
-
- /* Update udp encap ports */
- if (ipsec_xfrm->options.udp_encap == 1) {
- sa->udp_src = 4500;
- sa->udp_dst = 4500;
- }
-
-skip_tunnel_info:
- rte_wmb();
-
- /* Enable SA */
- ctl->valid = 1;
- return 0;
-}
-
-bool
-cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa)
-{
- return !!sa->ctl.valid;
-}
-
-bool
-cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa)
-{
- return !!sa->ctl.valid;
-}
-
uint8_t
cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
enum rte_crypto_auth_algorithm a_algo,
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 2277ce9144..72628ef3b8 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -48,18 +48,6 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
bool __roc_api cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa);
bool __roc_api cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa);
-/* [CN9K, CN10K) */
-int __roc_api
-cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
- struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm);
-int __roc_api
-cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
- struct rte_security_ipsec_xform *ipsec_xfrm,
- struct rte_crypto_sym_xform *crypto_xfrm);
-bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
-bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
-
/* [CN9K] */
int __roc_api
cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
diff --git a/drivers/common/cnxk/roc_ie_on.h b/drivers/common/cnxk/roc_ie_on.h
index 9933ffa148..11c995e9d1 100644
--- a/drivers/common/cnxk/roc_ie_on.h
+++ b/drivers/common/cnxk/roc_ie_on.h
@@ -269,66 +269,6 @@ struct roc_ie_on_inb_sa {
#define ROC_IE_ON_UCC_L2_HDR_INFO_ERR 0xCF
#define ROC_IE_ON_UCC_L2_HDR_LEN_ERR 0xE0
-struct roc_ie_onf_sa_ctl {
- uint32_t spi;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_41_40 : 2;
- /* Disable SPI, SEQ data in RPTR for Inbound inline */
- uint64_t spi_seq_dis : 1;
- uint64_t esn_en : 1;
- uint64_t rsvd_44_45 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct roc_onf_ipsec_outb_sa {
- /* w0 */
- struct roc_ie_onf_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4];
- uint16_t udp_src;
- uint16_t udp_dst;
-
- /* w2 */
- uint32_t ip_src;
- uint32_t ip_dst;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
-struct roc_onf_ipsec_inb_sa {
- /* w0 */
- struct roc_ie_onf_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4]; /* Only for AES-GCM */
- uint32_t unused;
-
- /* w2 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
#define ROC_ONF_IPSEC_INB_MAX_L2_SZ 32UL
#define ROC_ONF_IPSEC_OUTB_MAX_L2_SZ 30UL
#define ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ (ROC_ONF_IPSEC_OUTB_MAX_L2_SZ + 2)
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index ab1e9c0f98..f5ce26f03f 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -4,24 +4,6 @@
#ifndef _ROC_NIX_INL_H_
#define _ROC_NIX_INL_H_
-/* ONF INB HW area */
-#define ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ \
- PLT_ALIGN(sizeof(struct roc_onf_ipsec_inb_sa), ROC_ALIGN)
-/* ONF INB SW reserved area */
-#define ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD 384
-#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ \
- (ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD)
-#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2 9
-
-/* ONF OUTB HW area */
-#define ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ \
- PLT_ALIGN(sizeof(struct roc_onf_ipsec_outb_sa), ROC_ALIGN)
-/* ONF OUTB SW reserved area */
-#define ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD 128
-#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ \
- (ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD)
-#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2 8
-
/* ON INB HW area */
#define ROC_NIX_INL_ON_IPSEC_INB_HW_SZ \
PLT_ALIGN(sizeof(struct roc_ie_on_inb_sa), ROC_ALIGN)
@@ -31,10 +13,10 @@
(ROC_NIX_INL_ON_IPSEC_INB_HW_SZ + ROC_NIX_INL_ON_IPSEC_INB_SW_RSVD)
#define ROC_NIX_INL_ON_IPSEC_INB_SA_SZ_LOG2 10
-/* ONF OUTB HW area */
+/* ON OUTB HW area */
#define ROC_NIX_INL_ON_IPSEC_OUTB_HW_SZ \
PLT_ALIGN(sizeof(struct roc_ie_on_outb_sa), ROC_ALIGN)
-/* ONF OUTB SW reserved area */
+/* ON OUTB SW reserved area */
#define ROC_NIX_INL_ON_IPSEC_OUTB_SW_RSVD 256
#define ROC_NIX_INL_ON_IPSEC_OUTB_SA_SZ \
(ROC_NIX_INL_ON_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_ON_IPSEC_OUTB_SW_RSVD)
@@ -86,34 +68,6 @@ roc_nix_inl_on_ipsec_outb_sa_sw_rsvd(void *sa)
return PLT_PTR_ADD(sa, ROC_NIX_INL_ON_IPSEC_OUTB_HW_SZ);
}
-static inline struct roc_onf_ipsec_inb_sa *
-roc_nix_inl_onf_ipsec_inb_sa(uintptr_t base, uint64_t idx)
-{
- uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2;
-
- return PLT_PTR_ADD(base, off);
-}
-
-static inline struct roc_onf_ipsec_outb_sa *
-roc_nix_inl_onf_ipsec_outb_sa(uintptr_t base, uint64_t idx)
-{
- uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2;
-
- return PLT_PTR_ADD(base, off);
-}
-
-static inline void *
-roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(void *sa)
-{
- return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ);
-}
-
-static inline void *
-roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(void *sa)
-{
- return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ);
-}
-
/* Inline device SSO Work callback */
typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args,
uint32_t soft_exp_event);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index aa884a8fe2..e718c13acb 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -17,10 +17,6 @@ INTERNAL {
cnxk_logtype_sso;
cnxk_logtype_tim;
cnxk_logtype_tm;
- cnxk_onf_ipsec_inb_sa_fill;
- cnxk_onf_ipsec_outb_sa_fill;
- cnxk_onf_ipsec_inb_sa_valid;
- cnxk_onf_ipsec_outb_sa_valid;
cnxk_ot_ipsec_inb_sa_fill;
cnxk_ot_ipsec_outb_sa_fill;
cnxk_ot_ipsec_inb_sa_valid;
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 8e862be933..a0e9300cff 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -75,7 +75,7 @@ parse_ipsec_out_max_sa(const char *key, const char *value, void *extra_args)
if (errno)
val = 0;
- *(uint16_t *)extra_args = val;
+ *(uint32_t *)extra_args = val;
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.064692038 +0800
+++ 0031-common-cnxk-remove-CN9K-inline-IPsec-FP-opcodes.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From 930d94170e044ce1a2a2f222306c7dad50898728 Mon Sep 17 00:00:00 2001
+From fbfaa5ae043e9902427af18b0f4802235f6a60a7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 930d94170e044ce1a2a2f222306c7dad50898728 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -16 +18 @@
- drivers/common/cnxk/cnxk_security.c | 230 -------------------------
+ drivers/common/cnxk/cnxk_security.c | 229 -------------------------
@@ -22 +24 @@
- 6 files changed, 3 insertions(+), 355 deletions(-)
+ 6 files changed, 3 insertions(+), 354 deletions(-)
@@ -25 +27 @@
-index 64c901a57a..bab015e3b3 100644
+index a8c3ba90cd..40685d0912 100644
@@ -28 +30 @@
-@@ -574,236 +574,6 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
+@@ -618,235 +618,6 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
@@ -155,2 +157 @@
-- roc_se_hmac_opad_ipad_gen(ctl->auth_type, auth_xfrm->auth.key.data,
-- auth_xfrm->auth.key.length, hmac_opad_ipad, ROC_SE_IPSEC);
+- ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
@@ -266 +267 @@
-index b323b8b757..19eb9bb03d 100644
+index 2277ce9144..72628ef3b8 100644
@@ -286,2 +287,2 @@
- int __roc_api cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
+ int __roc_api
+ cnxk_on_ipsec_inb_sa_create(struct rte_security_ipsec_xform *ipsec,
@@ -360 +361 @@
-index a89b40ff61..8acd7e0545 100644
+index ab1e9c0f98..f5ce26f03f 100644
@@ -437 +438 @@
-index 892fcb1f0d..73fd890f20 100644
+index aa884a8fe2..e718c13acb 100644
@@ -452 +453 @@
-index 50dc80ce2c..1bab19fc23 100644
+index 8e862be933..a0e9300cff 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/cnxk: fix buffer size configuration' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (29 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: remove CN9K inline IPsec FP opcodes' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix Tx MTU " Xueming Li
` (92 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e71ac13a3825273e9cfec20cf4410dd0d4db2ebb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e71ac13a3825273e9cfec20cf4410dd0d4db2ebb Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Mon, 26 Feb 2024 19:05:28 +0530
Subject: [PATCH] net/cnxk: fix buffer size configuration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3e12147f71ffb96595bf35a92f3f3741ae9d91bb ]
In the case where cnxk_nix_mtu_set() is called before
data->min_rx_buf_size is set, use the buffer size from the first RQ's
mempool.
Fixes: 34b46320f446 ("net/cnxk: perform early MTU setup for event mode")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_ops.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 5de2919047..ec72c32826 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -544,8 +544,9 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
struct rte_eth_dev_data *data = eth_dev->data;
struct roc_nix *nix = &dev->nix;
+ struct cnxk_eth_rxq_sp *rxq_sp;
+ uint32_t buffsz = 0;
int rc = -EINVAL;
- uint32_t buffsz;
frame_size += CNXK_NIX_TIMESYNC_RX_OFFSET * dev->ptp_en;
@@ -561,8 +562,24 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
- old_frame_size = data->mtu + CNXK_NIX_L2_OVERHEAD;
+ if (!eth_dev->data->nb_rx_queues)
+ goto skip_buffsz_check;
+
+ /* Perform buff size check */
+ if (data->min_rx_buf_size) {
+ buffsz = data->min_rx_buf_size;
+ } else if (eth_dev->data->rx_queues && eth_dev->data->rx_queues[0]) {
+ rxq_sp = cnxk_eth_rxq_to_sp(data->rx_queues[0]);
+
+ if (rxq_sp->qconf.mp)
+ buffsz = rte_pktmbuf_data_room_size(rxq_sp->qconf.mp);
+ }
+
+ /* Skip validation if RQ's are not yet setup */
+ if (!buffsz)
+ goto skip_buffsz_check;
+
+ buffsz -= RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
* when this feature has not been enabled before.
@@ -580,6 +597,8 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
+skip_buffsz_check:
+ old_frame_size = data->mtu + CNXK_NIX_L2_OVERHEAD;
/* if new MTU was smaller than old one, then flush all SQs before MTU change */
if (old_frame_size > frame_size) {
if (data->dev_started) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.097194496 +0800
+++ 0032-net-cnxk-fix-buffer-size-configuration.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From 3e12147f71ffb96595bf35a92f3f3741ae9d91bb Mon Sep 17 00:00:00 2001
+From e71ac13a3825273e9cfec20cf4410dd0d4db2ebb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3e12147f71ffb96595bf35a92f3f3741ae9d91bb ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index e9ab8da781..e816884d47 100644
+index 5de2919047..ec72c32826 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
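The control flow added by the patch can be summarized as a fallback chain: prefer the recorded minimum Rx buffer size, else derive it from the first RQ's mempool data room, else skip the buffer-size validation entirely. A hedged sketch of that chain (hypothetical helper; the headroom constant mirrors DPDK's common RTE_PKTMBUF_HEADROOM default of 128, which a real build would take from the configuration):

```c
#include <stdint.h>

/* Assumed headroom value; DPDK's RTE_PKTMBUF_HEADROOM defaults to 128. */
#define HEADROOM 128u

/* Pick the Rx buffer size used for MTU validation. Returns 0 when no
 * size can be derived yet, which the caller treats as "skip the check"
 * (the skip_buffsz_check label in the patch). */
static uint32_t pick_buffsz(uint32_t min_rx_buf_size,
			    uint32_t first_rq_data_room)
{
	uint32_t buffsz = 0;

	if (min_rx_buf_size)
		buffsz = min_rx_buf_size;	/* normal, post-setup case */
	else if (first_rq_data_room)
		buffsz = first_rq_data_room;	/* early event-mode setup */

	if (!buffsz)
		return 0;			/* RQs not set up yet */

	return buffsz - HEADROOM;
}
```

The early-return-0 branch is what lets the event-mode path call the MTU setup before any Rx queue exists without tripping a bogus scatter check.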
* patch 'common/cnxk: fix Tx MTU configuration' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (30 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: fix buffer size configuration' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: fix MTU limit' " Xueming Li
` (91 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3e73021b35d56bd1f17648047fa2abd439d653bf
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3e73021b35d56bd1f17648047fa2abd439d653bf Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Mon, 26 Feb 2024 19:05:29 +0530
Subject: [PATCH] common/cnxk: fix Tx MTU configuration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit cc9f534f60815d858b946062cb1d9701c91b9b58 ]
Skip setting the Tx MTU separately, as the Tx credit configuration is now
based on the maximum MTU possible for that link.
Also, initialize the MTU with the maximum value for that port.
Fixes: 8589ec212e80 ("net/cnxk: support MTU set")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.c | 2 +-
drivers/common/cnxk/roc_nix.h | 2 --
drivers/net/cnxk/cnxk_ethdev_ops.c | 12 +-----------
3 files changed, 2 insertions(+), 14 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index f64933a1d9..afbc3eb901 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -482,7 +482,7 @@ skip_dev_init:
sdp_lbk_id_update(pci_dev, nix);
nix->pci_dev = pci_dev;
nix->reta_sz = reta_sz;
- nix->mtu = ROC_NIX_DEFAULT_HW_FRS;
+ nix->mtu = roc_nix_max_pkt_len(roc_nix);
nix->dmac_flt_idx = -1;
/* Register error and ras interrupts */
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index acdd1c4cbc..250d710c07 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -267,8 +267,6 @@ struct roc_nix_eeprom_info {
#define ROC_NIX_RSS_KEY_LEN 48 /* 352 Bits */
#define ROC_NIX_RSS_MCAM_IDX_DEFAULT (-1)
-#define ROC_NIX_DEFAULT_HW_FRS 1514
-
#define ROC_NIX_VWQE_MAX_SIZE_LOG2 11
#define ROC_NIX_VWQE_MIN_SIZE_LOG2 2
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index ec72c32826..2fdad8b4b6 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -610,19 +610,9 @@ skip_buffsz_check:
frame_size -= RTE_ETHER_CRC_LEN;
- /* Update mtu on Tx */
- rc = roc_nix_mac_mtu_set(nix, frame_size);
- if (rc) {
- plt_err("Failed to set MTU, rc=%d", rc);
- goto exit;
- }
-
- /* Sync same frame size on Rx */
+ /* Set frame size on Rx */
rc = roc_nix_mac_max_rx_len_set(nix, frame_size);
if (rc) {
- /* Rollback to older mtu */
- roc_nix_mac_mtu_set(nix,
- old_frame_size - RTE_ETHER_CRC_LEN);
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.120148366 +0800
+++ 0033-common-cnxk-fix-Tx-MTU-configuration.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From cc9f534f60815d858b946062cb1d9701c91b9b58 Mon Sep 17 00:00:00 2001
+From 3e73021b35d56bd1f17648047fa2abd439d653bf Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit cc9f534f60815d858b946062cb1d9701c91b9b58 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 97c0ae3e25..90ccb260fb 100644
+index f64933a1d9..afbc3eb901 100644
@@ -25 +27 @@
-@@ -484,7 +484,7 @@ skip_dev_init:
+@@ -482,7 +482,7 @@ skip_dev_init:
@@ -35 +37 @@
-index 2a198de458..4db71544f0 100644
+index acdd1c4cbc..250d710c07 100644
@@ -48 +50 @@
-index e816884d47..4962f3bced 100644
+index ec72c32826..2fdad8b4b6 100644
* patch 'net/cnxk: fix MTU limit' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (31 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix Tx MTU " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix RSS RETA configuration' " Xueming Li
` (90 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f4c83ba01c38bf22ca2bb3d2ada958a0c4f041ec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f4c83ba01c38bf22ca2bb3d2ada958a0c4f041ec Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Mon, 26 Feb 2024 19:05:30 +0530
Subject: [PATCH] net/cnxk: fix MTU limit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 402de2fd8729b61a3ace67c597e99893acb426d4 ]
The device can support a maximum frame size of up to 9212 bytes. While
configuring the MTU, the overhead is taken as the Ethernet header size, CRC and
2 * (VLAN tags), which translates to 26 bytes.
The overhead exposed to the user via rte_eth_dev_info() was 18 bytes, which
led to setting a wrong Rx frame size.
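The overhead arithmetic above can be sketched standalone as follows; the constant names are local to this example and mirror the usual Ethernet sizes, they are not the driver's actual macros:

```c
#include <assert.h>
#include <stdint.h>

#define ETH_HDR_LEN  14U  /* dst MAC (6) + src MAC (6) + ethertype (2) */
#define ETH_CRC_LEN   4U  /* frame check sequence */
#define VLAN_TAG_LEN  4U  /* one 802.1Q tag */

/* header + CRC + 2 stacked VLAN tags = 14 + 4 + 2*4 = 26 bytes */
#define L2_OVERHEAD (ETH_HDR_LEN + ETH_CRC_LEN + 2U * VLAN_TAG_LEN)

/* Max MTU derived from the max Rx packet length, as the fix below does. */
static uint32_t max_mtu(uint32_t max_rx_pktlen)
{
	return max_rx_pktlen - L2_OVERHEAD;
}
```

With the 18-byte overhead (header + CRC only), a 9212-byte frame limit would advertise an MTU 8 bytes too large, hence the wrong Rx frame size.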
Fixes: 8589ec212e80 ("net/cnxk: support MTU set")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_ops.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 2fdad8b4b6..3c77f79fcc 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -20,8 +20,7 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
devinfo->max_mac_addrs = dev->max_mac_entries;
devinfo->max_vfs = pci_dev->max_vfs;
- devinfo->max_mtu = devinfo->max_rx_pktlen -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ devinfo->max_mtu = devinfo->max_rx_pktlen - CNXK_NIX_L2_OVERHEAD;
devinfo->min_mtu = devinfo->min_rx_bufsize - CNXK_NIX_L2_OVERHEAD;
devinfo->rx_offload_capa = dev->rx_offload_capa;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.148319129 +0800
+++ 0034-net-cnxk-fix-MTU-limit.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From 402de2fd8729b61a3ace67c597e99893acb426d4 Mon Sep 17 00:00:00 2001
+From f4c83ba01c38bf22ca2bb3d2ada958a0c4f041ec Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 402de2fd8729b61a3ace67c597e99893acb426d4 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 4962f3bced..56049c5dd2 100644
+index 2fdad8b4b6..3c77f79fcc 100644
* patch 'common/cnxk: fix RSS RETA configuration' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (32 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: fix MTU limit' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: fix indirect mbuf handling in Tx' " Xueming Li
` (89 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Kommula Shiva Shankar; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6c6cd1fe5322873d1116f178d263673ab0c7757f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6c6cd1fe5322873d1116f178d263673ab0c7757f Mon Sep 17 00:00:00 2001
From: Kommula Shiva Shankar <kshankar@marvell.com>
Date: Mon, 26 Feb 2024 19:05:31 +0530
Subject: [PATCH] common/cnxk: fix RSS RETA configuration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit dce7f0c2400246da41049c64c0c461a24a4c0498 ]
Update the copy of queue entries in the RETA table based on the entry data type.
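The bug class fixed here can be illustrated standalone: RETA entries are uint16_t queue indices, so a memcpy length equal to the entry count copies only half of the table. RETA_SZ below is illustrative, not the driver's ROC_NIX_RSS_RETA_MAX:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RETA_SZ 8  /* illustrative table size */

/* Buggy: byte count equals the entry count, so only half the
 * uint16_t entries are copied. */
static void reta_copy_buggy(uint16_t dst[RETA_SZ], const uint16_t src[RETA_SZ])
{
	memcpy(dst, src, RETA_SZ);                    /* RETA_SZ bytes only */
}

/* Fixed: scale the byte count by the entry size to cover the table. */
static void reta_copy_fixed(uint16_t dst[RETA_SZ], const uint16_t src[RETA_SZ])
{
	memcpy(dst, src, sizeof(uint16_t) * RETA_SZ); /* whole table */
}
```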
Fixes: 1bf6746e653b ("common/cnxk: support NIX RSS")
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
---
drivers/common/cnxk/roc_nix_rss.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_rss.c b/drivers/common/cnxk/roc_nix_rss.c
index 3599eb9bae..2b88e1360d 100644
--- a/drivers/common/cnxk/roc_nix_rss.c
+++ b/drivers/common/cnxk/roc_nix_rss.c
@@ -196,7 +196,7 @@ roc_nix_rss_reta_set(struct roc_nix *roc_nix, uint8_t group,
if (rc)
return rc;
- memcpy(&nix->reta[group], reta, ROC_NIX_RSS_RETA_MAX);
+ memcpy(&nix->reta[group], reta, sizeof(uint16_t) * ROC_NIX_RSS_RETA_MAX);
return 0;
}
@@ -209,7 +209,7 @@ roc_nix_rss_reta_get(struct roc_nix *roc_nix, uint8_t group,
if (group >= ROC_NIX_RSS_GRPS)
return NIX_ERR_PARAM;
- memcpy(reta, &nix->reta[group], ROC_NIX_RSS_RETA_MAX);
+ memcpy(reta, &nix->reta[group], sizeof(uint16_t) * ROC_NIX_RSS_RETA_MAX);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.175059894 +0800
+++ 0035-common-cnxk-fix-RSS-RETA-configuration.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From dce7f0c2400246da41049c64c0c461a24a4c0498 Mon Sep 17 00:00:00 2001
+From 6c6cd1fe5322873d1116f178d263673ab0c7757f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit dce7f0c2400246da41049c64c0c461a24a4c0498 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
* patch 'net/cnxk: fix indirect mbuf handling in Tx' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (33 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix RSS RETA configuration' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: add cookies check for multi-segment offload' " Xueming Li
` (88 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: Rahul Bhansali, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0e5159a2235eccd26418d422868a9301d0aaf741
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0e5159a2235eccd26418d422868a9301d0aaf741 Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Mon, 26 Feb 2024 19:05:32 +0530
Subject: [PATCH] net/cnxk: fix indirect mbuf handling in Tx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3232c95d2c361bdf5509cb9e9d0b9820398c1335 ]
An indirect mbuf can point to data from a different pool. Use the
correct AURA in the NIX send header in the SG2 and SG cases.
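A minimal model of why the free target must come from the parent's pool: an indirect mbuf borrows its data buffer from a parent mbuf, so the hardware free (the AURA programmed into the send descriptor) must target the parent's pool, not the indirect header's own pool. The struct layout below is illustrative only, not DPDK's struct rte_mbuf:

```c
#include <assert.h>
#include <stddef.h>

struct mempool { int id; };

struct mbuf {
	struct mempool *pool;  /* pool this mbuf header came from */
	struct mbuf *parent;   /* non-NULL for indirect mbufs */
};

/* Pool that owns the data buffer, i.e. the pool the hardware free
 * must return the buffer to. */
static struct mempool *buffer_pool(const struct mbuf *m)
{
	return m->parent ? m->parent->pool : m->pool;
}
```

Using `m->pool` unconditionally would return an indirect mbuf's buffer to the wrong pool, which is the class of bug the AURA corrections in this patch address.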
Fixes: 862e28128707 ("net/cnxk: add vector Tx for CN9K")
Fixes: f71b7dbbf04b ("net/cnxk: add vector Tx for CN10K")
Fixes: 7e95c11df4f1 ("net/cnxk: add multi-segment Tx for CN9K")
Fixes: 3626d5195d49 ("net/cnxk: add multi-segment Tx for CN10K")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 6 +
drivers/net/cnxk/cn10k_rxtx.h | 1 +
drivers/net/cnxk/cn10k_tx.h | 269 +++++++++++++++++++--------
drivers/net/cnxk/cn9k_ethdev.c | 6 +
drivers/net/cnxk/cn9k_ethdev.h | 1 +
drivers/net/cnxk/cn9k_tx.h | 299 +++++++++++++++++++++---------
drivers/net/cnxk/cnxk_ethdev_dp.h | 10 +-
7 files changed, 414 insertions(+), 178 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 4a4e97287c..29b7f2ba5e 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -389,7 +389,13 @@ cn10k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
struct roc_nix_sq *sq = &dev->sqs[qidx];
do {
handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F);
+ /* Check if SQ is empty */
roc_nix_sq_head_tail_get(nix, sq->qid, &head, &tail);
+ if (head != tail)
+ continue;
+
+ /* Check if completion CQ is empty */
+ roc_nix_cq_head_tail_get(nix, sq->cqid, &head, &tail);
} while (head != tail);
}
diff --git a/drivers/net/cnxk/cn10k_rxtx.h b/drivers/net/cnxk/cn10k_rxtx.h
index aeffc4ac92..9f33d0192e 100644
--- a/drivers/net/cnxk/cn10k_rxtx.h
+++ b/drivers/net/cnxk/cn10k_rxtx.h
@@ -177,6 +177,7 @@ handle_tx_completion_pkts(struct cn10k_eth_txq *txq, uint8_t mt_safe)
m = m_next;
}
rte_pktmbuf_free_seg(m);
+ txq->tx_compl.ptr[tx_compl_s0->sqe_id] = NULL;
head++;
head &= qmask;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 467f0ccc65..025eff2913 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -786,8 +786,9 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
static __rte_always_inline uint64_t
cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
- struct nix_send_hdr_s *send_hdr)
+ struct nix_send_hdr_s *send_hdr, uint64_t *aura)
{
+ struct rte_mbuf *prev = NULL;
uint32_t sqe_id;
if (RTE_MBUF_HAS_EXTBUF(m)) {
@@ -796,7 +797,10 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
return 1;
}
if (send_hdr->w0.pnc) {
- txq->tx_compl.ptr[send_hdr->w1.sqe_id]->next = m;
+ sqe_id = send_hdr->w1.sqe_id;
+ prev = txq->tx_compl.ptr[sqe_id];
+ m->next = prev;
+ txq->tx_compl.ptr[sqe_id] = m;
} else {
sqe_id = __atomic_fetch_add(&txq->tx_compl.sqe_id, 1, __ATOMIC_RELAXED);
send_hdr->w0.pnc = 1;
@@ -806,10 +810,155 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
}
return 1;
} else {
- return cnxk_nix_prefree_seg(m);
+ return cnxk_nix_prefree_seg(m, aura);
}
}
+#if defined(RTE_ARCH_ARM64)
+/* Only called for first segments of single segmented mbufs */
+static __rte_always_inline void
+cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+ uint64x2_t *senddesc01_w0, uint64x2_t *senddesc23_w0,
+ uint64x2_t *senddesc01_w1, uint64x2_t *senddesc23_w1)
+{
+ struct rte_mbuf **tx_compl_ptr = txq->tx_compl.ptr;
+ uint32_t nb_desc_mask = txq->tx_compl.nb_desc_mask;
+ bool tx_compl_ena = txq->tx_compl.ena;
+ struct rte_mbuf *m0, *m1, *m2, *m3;
+ struct rte_mbuf *cookie;
+ uint64_t w0, w1, aura;
+ uint64_t sqe_id;
+
+ m0 = mbufs[0];
+ m1 = mbufs[1];
+ m2 = mbufs[2];
+ m3 = mbufs[3];
+
+ /* mbuf 0 */
+ w0 = vgetq_lane_u64(*senddesc01_w0, 0);
+ if (RTE_MBUF_HAS_EXTBUF(m0)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc01_w1, 0);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m0);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m0;
+ *senddesc01_w1 = vsetq_lane_u64(w1, *senddesc01_w1, 0);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m0) ? m0 : rte_mbuf_from_indirect(m0);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m0, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 0);
+
+ /* mbuf1 */
+ w0 = vgetq_lane_u64(*senddesc01_w0, 1);
+ if (RTE_MBUF_HAS_EXTBUF(m1)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc01_w1, 1);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m1);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m1;
+ *senddesc01_w1 = vsetq_lane_u64(w1, *senddesc01_w1, 1);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m1) ? m1 : rte_mbuf_from_indirect(m1);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m1, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 1);
+
+ /* mbuf 2 */
+ w0 = vgetq_lane_u64(*senddesc23_w0, 0);
+ if (RTE_MBUF_HAS_EXTBUF(m2)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc23_w1, 0);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m2);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m2;
+ *senddesc23_w1 = vsetq_lane_u64(w1, *senddesc23_w1, 0);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m2) ? m2 : rte_mbuf_from_indirect(m2);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m2, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 0);
+
+ /* mbuf3 */
+ w0 = vgetq_lane_u64(*senddesc23_w0, 1);
+ if (RTE_MBUF_HAS_EXTBUF(m3)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc23_w1, 1);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m3);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m3;
+ *senddesc23_w1 = vsetq_lane_u64(w1, *senddesc23_w1, 1);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m3) ? m3 : rte_mbuf_from_indirect(m3);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m3, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 1);
+#ifndef RTE_LIBRTE_MEMPOOL_DEBUG
+ RTE_SET_USED(cookie);
+#endif
+}
+#endif
+
static __rte_always_inline void
cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
{
@@ -889,6 +1038,9 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
sg = (union nix_send_sg_s *)(cmd + 2);
}
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+ send_hdr->w0.pnc = 0;
+
if (flags & (NIX_TX_NEED_SEND_HDR_W1 | NIX_TX_OFFLOAD_SECURITY_F)) {
ol_flags = m->ol_flags;
w1.u = 0;
@@ -1049,19 +1201,30 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
send_hdr->w1.u = w1.u;
if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ struct rte_mbuf *cookie;
+
sg->seg1_size = send_hdr->w0.total;
*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ uint64_t aura;
+
/* DF bit = 1 if refcount of current mbuf or parent mbuf
* is greater than 1
* DF bit = 0 otherwise
*/
- send_hdr->w0.df = cn10k_nix_prefree_seg(m, txq, send_hdr);
+ aura = send_hdr->w0.aura;
+ send_hdr->w0.df = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ send_hdr->w0.aura = aura;
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
/* Mark mempool object as "put" since it is freed by NIX */
if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+#else
+ RTE_SET_USED(cookie);
+#endif
} else {
sg->seg1_size = m->data_len;
*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
@@ -1135,6 +1298,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
struct nix_send_hdr_s *send_hdr;
union nix_send_sg_s *sg, l_sg;
union nix_send_sg2_s l_sg2;
+ struct rte_mbuf *cookie;
struct rte_mbuf *m_next;
uint8_t off, is_sg2;
uint64_t len, dlen;
@@ -1163,21 +1327,26 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
len -= dlen;
nb_segs = m->nb_segs - 1;
m_next = m->next;
+ m->next = NULL;
slist = &cmd[3 + off + 1];
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- prefree = cn10k_nix_prefree_seg(m, txq, send_hdr);
+ aura = send_hdr->w0.aura;
+ prefree = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ send_hdr->w0.aura = aura;
l_sg.i1 = prefree;
}
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
/* Mark mempool object as "put" since it is freed by NIX */
if (!prefree)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
+#else
+ RTE_SET_USED(cookie);
#endif
- m->next = NULL;
/* Quickly handle single segmented packets. With this if-condition
* compiler will completely optimize out the below do-while loop
@@ -1207,9 +1376,12 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
aura = aura0;
prefree = 0;
+ m->next = NULL;
+
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
- prefree = cn10k_nix_prefree_seg(m, txq, send_hdr);
+ prefree = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
is_sg2 = aura != aura0 && !prefree;
}
@@ -1259,13 +1431,14 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
l_sg.subdc = NIX_SUBDC_SG;
slist++;
}
- m->next = NULL;
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
/* Mark mempool object as "put" since it is freed by NIX
*/
if (!prefree)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+#else
+ RTE_SET_USED(cookie);
#endif
m = m_next;
} while (nb_segs);
@@ -1997,13 +2170,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
uint64x2_t sgdesc01_w0, sgdesc23_w0;
uint64x2_t sgdesc01_w1, sgdesc23_w1;
struct cn10k_eth_txq *txq = tx_queue;
- uint64x2_t xmask01_w0, xmask23_w0;
- uint64x2_t xmask01_w1, xmask23_w1;
rte_iova_t io_addr = txq->io_addr;
uint8_t lnum, shift = 0, loff = 0;
uintptr_t laddr = txq->lmt_base;
uint8_t c_lnum, c_shft, c_loff;
- struct nix_send_hdr_s send_hdr;
uint64x2_t ltypes01, ltypes23;
uint64x2_t xtmp128, ytmp128;
uint64x2_t xmask01, xmask23;
@@ -2153,7 +2323,7 @@ again:
}
/* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
senddesc01_w0 =
- vbicq_u64(senddesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+ vbicq_u64(senddesc01_w0, vdupq_n_u64(0x800FFFFFFFF));
sgdesc01_w0 = vbicq_u64(sgdesc01_w0, vdupq_n_u64(0xFFFFFFFF));
senddesc23_w0 = senddesc01_w0;
@@ -2859,73 +3029,8 @@ again:
!(flags & NIX_TX_MULTI_SEG_F) &&
!(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
/* Set don't free bit if reference count > 1 */
- xmask01_w0 = vdupq_n_u64(0);
- xmask01_w1 = vdupq_n_u64(0);
- xmask23_w0 = xmask01_w0;
- xmask23_w1 = xmask01_w1;
-
- /* Move mbufs to iova */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn10k_nix_prefree_seg((struct rte_mbuf *)mbuf0, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask01_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask01_w0, 0);
- xmask01_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask01_w1, 0);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf0)->pool,
- (void **)&mbuf0, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn10k_nix_prefree_seg((struct rte_mbuf *)mbuf1, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask01_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask01_w0, 1);
- xmask01_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask01_w1, 1);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf1)->pool,
- (void **)&mbuf1, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn10k_nix_prefree_seg((struct rte_mbuf *)mbuf2, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask23_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask23_w0, 0);
- xmask23_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask23_w1, 0);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf2)->pool,
- (void **)&mbuf2, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn10k_nix_prefree_seg((struct rte_mbuf *)mbuf3, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask23_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask23_w0, 1);
- xmask23_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask23_w1, 1);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf3)->pool,
- (void **)&mbuf3, 1, 0);
- }
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01_w0);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23_w0);
- senddesc01_w1 = vorrq_u64(senddesc01_w1, xmask01_w1);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, xmask23_w1);
+ cn10k_nix_prefree_seg_vec(tx_pkts, txq, &senddesc01_w0, &senddesc23_w0,
+ &senddesc01_w1, &senddesc23_w1);
} else if (!(flags & NIX_TX_MULTI_SEG_F) &&
!(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
/* Move mbufs to iova */
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index bae4dda5e2..b92b978a27 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -347,7 +347,13 @@ cn9k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
struct roc_nix_sq *sq = &dev->sqs[qidx];
do {
handle_tx_completion_pkts(txq, 0);
+ /* Check if SQ is empty */
roc_nix_sq_head_tail_get(nix, sq->qid, &head, &tail);
+ if (head != tail)
+ continue;
+
+ /* Check if completion CQ is empty */
+ roc_nix_cq_head_tail_get(nix, sq->cqid, &head, &tail);
} while (head != tail);
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 9e0a3c5bb2..6ae0db62ca 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -169,6 +169,7 @@ handle_tx_completion_pkts(struct cn9k_eth_txq *txq, uint8_t mt_safe)
m = m_next;
}
rte_pktmbuf_free_seg(m);
+ txq->tx_compl.ptr[tx_compl_s0->sqe_id] = NULL;
head++;
head &= qmask;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index fba4bb4215..3596651cc2 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -83,9 +83,10 @@ cn9k_nix_tx_skeleton(struct cn9k_eth_txq *txq, uint64_t *cmd,
}
static __rte_always_inline uint64_t
-cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq,
- struct nix_send_hdr_s *send_hdr)
+cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq, struct nix_send_hdr_s *send_hdr,
+ uint64_t *aura)
{
+ struct rte_mbuf *prev;
uint32_t sqe_id;
if (RTE_MBUF_HAS_EXTBUF(m)) {
@@ -94,7 +95,10 @@ cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq,
return 1;
}
if (send_hdr->w0.pnc) {
- txq->tx_compl.ptr[send_hdr->w1.sqe_id]->next = m;
+ sqe_id = send_hdr->w1.sqe_id;
+ prev = txq->tx_compl.ptr[sqe_id];
+ m->next = prev;
+ txq->tx_compl.ptr[sqe_id] = m;
} else {
sqe_id = __atomic_fetch_add(&txq->tx_compl.sqe_id, 1, __ATOMIC_RELAXED);
send_hdr->w0.pnc = 1;
@@ -104,10 +108,155 @@ cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq,
}
return 1;
} else {
- return cnxk_nix_prefree_seg(m);
+ return cnxk_nix_prefree_seg(m, aura);
}
}
+#if defined(RTE_ARCH_ARM64)
+/* Only called for first segments of single segmented mbufs */
+static __rte_always_inline void
+cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
+ uint64x2_t *senddesc01_w0, uint64x2_t *senddesc23_w0,
+ uint64x2_t *senddesc01_w1, uint64x2_t *senddesc23_w1)
+{
+ struct rte_mbuf **tx_compl_ptr = txq->tx_compl.ptr;
+ uint32_t nb_desc_mask = txq->tx_compl.nb_desc_mask;
+ bool tx_compl_ena = txq->tx_compl.ena;
+ struct rte_mbuf *m0, *m1, *m2, *m3;
+ struct rte_mbuf *cookie;
+ uint64_t w0, w1, aura;
+ uint64_t sqe_id;
+
+ m0 = mbufs[0];
+ m1 = mbufs[1];
+ m2 = mbufs[2];
+ m3 = mbufs[3];
+
+ /* mbuf 0 */
+ w0 = vgetq_lane_u64(*senddesc01_w0, 0);
+ if (RTE_MBUF_HAS_EXTBUF(m0)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc01_w1, 0);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m0);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m0;
+ *senddesc01_w1 = vsetq_lane_u64(w1, *senddesc01_w1, 0);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m0) ? m0 : rte_mbuf_from_indirect(m0);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m0, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 0);
+
+ /* mbuf1 */
+ w0 = vgetq_lane_u64(*senddesc01_w0, 1);
+ if (RTE_MBUF_HAS_EXTBUF(m1)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc01_w1, 1);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m1);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m1;
+ *senddesc01_w1 = vsetq_lane_u64(w1, *senddesc01_w1, 1);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m1) ? m1 : rte_mbuf_from_indirect(m1);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m1, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 1);
+
+ /* mbuf 2 */
+ w0 = vgetq_lane_u64(*senddesc23_w0, 0);
+ if (RTE_MBUF_HAS_EXTBUF(m2)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc23_w1, 0);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m2);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m2;
+ *senddesc23_w1 = vsetq_lane_u64(w1, *senddesc23_w1, 0);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m2) ? m2 : rte_mbuf_from_indirect(m2);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m2, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 0);
+
+ /* mbuf3 */
+ w0 = vgetq_lane_u64(*senddesc23_w0, 1);
+ if (RTE_MBUF_HAS_EXTBUF(m3)) {
+ w0 |= BIT_ULL(19);
+ w1 = vgetq_lane_u64(*senddesc23_w1, 1);
+ w1 &= ~0xFFFF000000000000UL;
+ if (unlikely(!tx_compl_ena)) {
+ rte_pktmbuf_free_seg(m3);
+ } else {
+ sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
+ rte_memory_order_relaxed);
+ sqe_id = sqe_id & nb_desc_mask;
+ /* Set PNC */
+ w0 |= BIT_ULL(43);
+ w1 |= sqe_id << 48;
+ tx_compl_ptr[sqe_id] = m3;
+ *senddesc23_w1 = vsetq_lane_u64(w1, *senddesc23_w1, 1);
+ }
+ } else {
+ cookie = RTE_MBUF_DIRECT(m3) ? m3 : rte_mbuf_from_indirect(m3);
+ aura = (w0 >> 20) & 0xFFFFF;
+ w0 &= ~0xFFFFF00000UL;
+ w0 |= cnxk_nix_prefree_seg(m3, &aura) << 19;
+ w0 |= aura << 20;
+
+ if ((w0 & BIT_ULL(19)) == 0)
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+ }
+ *senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 1);
+#ifndef RTE_LIBRTE_MEMPOOL_DEBUG
+ RTE_SET_USED(cookie);
+#endif
+}
+#endif
+
static __rte_always_inline void
cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
{
@@ -191,6 +340,8 @@ cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq,
ol_flags = m->ol_flags;
w1.u = 0;
}
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+ send_hdr->w0.pnc = 0;
if (!(flags & NIX_TX_MULTI_SEG_F))
send_hdr->w0.total = m->data_len;
@@ -345,23 +496,33 @@ cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq,
send_hdr->w1.u = w1.u;
if (!(flags & NIX_TX_MULTI_SEG_F)) {
+ struct rte_mbuf *cookie;
+
sg->seg1_size = m->data_len;
*(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ uint64_t aura;
/* DF bit = 1 if refcount of current mbuf or parent mbuf
* is greater than 1
* DF bit = 0 otherwise
*/
- send_hdr->w0.df = cn9k_nix_prefree_seg(m, txq, send_hdr);
+ aura = send_hdr->w0.aura;
+ send_hdr->w0.df = cn9k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ send_hdr->w0.aura = aura;
/* Ensuring mbuf fields which got updated in
* cnxk_nix_prefree_seg are written before LMTST.
*/
rte_io_wmb();
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
/* Mark mempool object as "put" since it is freed by NIX */
if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
+#else
+ RTE_SET_USED(cookie);
+#endif
} else {
sg->seg1_size = m->data_len;
*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
@@ -443,6 +604,8 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
{
struct nix_send_hdr_s *send_hdr;
+ uint64_t prefree = 0, aura;
+ struct rte_mbuf *cookie;
union nix_send_sg_s *sg;
struct rte_mbuf *m_next;
uint64_t *slist, sg_u;
@@ -467,17 +630,23 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
m_next = m->next;
slist = &cmd[3 + off + 1];
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr) << 55);
+ aura = send_hdr->w0.aura;
+ prefree = (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << 55);
+ send_hdr->w0.aura = aura;
+ sg_u |= prefree;
rte_io_wmb();
}
/* Mark mempool object as "put" since it is freed by NIX */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (!(sg_u & (1ULL << 55)))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
+#else
+ RTE_SET_USED(cookie);
#endif
m = m_next;
if (!m)
@@ -488,16 +657,17 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
m_next = m->next;
sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
*slist = rte_mbuf_data_iova(m);
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr) << (i + 55));
+ sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, NULL) << (i + 55));
/* Commit changes to mbuf */
rte_io_wmb();
}
/* Mark mempool object as "put" since it is freed by NIX */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
#endif
slist++;
@@ -709,8 +879,8 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
struct nix_send_hdr_s *send_hdr,
union nix_send_sg_s *sg, const uint32_t flags)
{
- struct rte_mbuf *m_next;
- uint64_t *slist, sg_u;
+ struct rte_mbuf *m_next, *cookie;
+ uint64_t *slist, sg_u, aura;
uint16_t nb_segs;
uint64_t segdw;
int i = 1;
@@ -727,13 +897,19 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
m_next = m->next;
/* Set invert df if buffer is not to be freed by H/W */
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr) << 55);
- /* Mark mempool object as "put" since it is freed by NIX */
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+ aura = send_hdr->w0.aura;
+ sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << 55);
+ send_hdr->w0.aura = aura;
+ }
+ /* Mark mempool object as "put" since it is freed by NIX */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (!(sg_u & (1ULL << 55)))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
+#else
+ RTE_SET_USED(cookie);
#endif
m = m_next;
@@ -742,14 +918,15 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
m_next = m->next;
sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
*slist = rte_mbuf_data_iova(m);
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr) << (i + 55));
+ sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << (i + 55));
/* Mark mempool object as "put" since it is freed by NIX
*/
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
#endif
slist++;
@@ -789,15 +966,20 @@ cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq,
uint64x2_t *cmd1, const uint32_t flags)
{
struct nix_send_hdr_s send_hdr;
+ struct rte_mbuf *cookie;
union nix_send_sg_s sg;
+ uint64_t aura;
uint8_t ret;
if (m->nb_segs == 1) {
+ cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
send_hdr.w0.u = vgetq_lane_u64(cmd0[0], 0);
send_hdr.w1.u = vgetq_lane_u64(cmd0[0], 1);
sg.u = vgetq_lane_u64(cmd1[0], 0);
- sg.u |= (cn9k_nix_prefree_seg(m, txq, &send_hdr) << 55);
+ aura = send_hdr.w0.aura;
+ sg.u |= (cn9k_nix_prefree_seg(m, txq, &send_hdr, &aura) << 55);
+ send_hdr.w0.aura = aura;
cmd1[0] = vsetq_lane_u64(sg.u, cmd1[0], 0);
cmd0[0] = vsetq_lane_u64(send_hdr.w0.u, cmd0[0], 0);
cmd0[0] = vsetq_lane_u64(send_hdr.w1.u, cmd0[0], 1);
@@ -806,8 +988,10 @@ cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq,
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
sg.u = vgetq_lane_u64(cmd1[0], 0);
if (!(sg.u & (1ULL << 55)))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
rte_io_wmb();
+#else
+ RTE_SET_USED(cookie);
#endif
return 2 + !!(flags & NIX_TX_NEED_EXT_HDR) +
!!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
@@ -962,10 +1146,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64x2_t sgdesc01_w1, sgdesc23_w1;
struct cn9k_eth_txq *txq = tx_queue;
uint64_t *lmt_addr = txq->lmt_addr;
- uint64x2_t xmask01_w0, xmask23_w0;
- uint64x2_t xmask01_w1, xmask23_w1;
rte_iova_t io_addr = txq->io_addr;
- struct nix_send_hdr_s send_hdr;
uint64x2_t ltypes01, ltypes23;
uint64x2_t xtmp128, ytmp128;
uint64x2_t xmask01, xmask23;
@@ -1028,7 +1209,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
/* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
senddesc01_w0 =
- vbicq_u64(senddesc01_w0, vdupq_n_u64(0xFFFFFFFF));
+ vbicq_u64(senddesc01_w0, vdupq_n_u64(0x800FFFFFFFF));
sgdesc01_w0 = vbicq_u64(sgdesc01_w0, vdupq_n_u64(0xFFFFFFFF));
senddesc23_w0 = senddesc01_w0;
@@ -1732,74 +1913,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
!(flags & NIX_TX_MULTI_SEG_F)) {
/* Set don't free bit if reference count > 1 */
- xmask01_w0 = vdupq_n_u64(0);
- xmask01_w1 = vdupq_n_u64(0);
- xmask23_w0 = xmask01_w0;
- xmask23_w1 = xmask01_w1;
-
- /* Move mbufs to iova */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn9k_nix_prefree_seg((struct rte_mbuf *)mbuf0, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask01_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask01_w0, 0);
- xmask01_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask01_w1, 0);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf0)->pool,
- (void **)&mbuf0, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn9k_nix_prefree_seg((struct rte_mbuf *)mbuf1, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask01_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask01_w0, 1);
- xmask01_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask01_w1, 1);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf1)->pool,
- (void **)&mbuf1, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn9k_nix_prefree_seg((struct rte_mbuf *)mbuf2, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask23_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask23_w0, 0);
- xmask23_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask23_w1, 0);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf2)->pool,
- (void **)&mbuf2, 1, 0);
- }
-
- send_hdr.w0.u = 0;
- send_hdr.w1.u = 0;
-
- if (cn9k_nix_prefree_seg((struct rte_mbuf *)mbuf3, txq, &send_hdr)) {
- send_hdr.w0.df = 1;
- xmask23_w0 = vsetq_lane_u64(send_hdr.w0.u, xmask23_w0, 1);
- xmask23_w1 = vsetq_lane_u64(send_hdr.w1.u, xmask23_w1, 1);
- } else {
- RTE_MEMPOOL_CHECK_COOKIES(
- ((struct rte_mbuf *)mbuf3)->pool,
- (void **)&mbuf3, 1, 0);
- }
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01_w0);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23_w0);
- senddesc01_w1 = vorrq_u64(senddesc01_w1, xmask01_w1);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, xmask23_w1);
-
+ cn9k_nix_prefree_seg_vec(tx_pkts, txq, &senddesc01_w0, &senddesc23_w0,
+ &senddesc01_w1, &senddesc23_w1);
/* Ensuring mbuf fields which got updated in
* cnxk_nix_prefree_seg are written before LMTST.
*/
diff --git a/drivers/net/cnxk/cnxk_ethdev_dp.h b/drivers/net/cnxk/cnxk_ethdev_dp.h
index c1f99a2616..67f40b8e25 100644
--- a/drivers/net/cnxk/cnxk_ethdev_dp.h
+++ b/drivers/net/cnxk/cnxk_ethdev_dp.h
@@ -84,7 +84,7 @@ struct cnxk_timesync_info {
/* Inlines */
static __rte_always_inline uint64_t
-cnxk_pktmbuf_detach(struct rte_mbuf *m)
+cnxk_pktmbuf_detach(struct rte_mbuf *m, uint64_t *aura)
{
struct rte_mempool *mp = m->pool;
uint32_t mbuf_size, buf_len;
@@ -94,6 +94,8 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
/* Update refcount of direct mbuf */
md = rte_mbuf_from_indirect(m);
+ if (aura)
+ *aura = roc_npa_aura_handle_to_aura(md->pool->pool_id);
refcount = rte_mbuf_refcnt_update(md, -1);
priv_size = rte_pktmbuf_priv_size(mp);
@@ -126,18 +128,18 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
}
static __rte_always_inline uint64_t
-cnxk_nix_prefree_seg(struct rte_mbuf *m)
+cnxk_nix_prefree_seg(struct rte_mbuf *m, uint64_t *aura)
{
if (likely(rte_mbuf_refcnt_read(m) == 1)) {
if (!RTE_MBUF_DIRECT(m))
- return cnxk_pktmbuf_detach(m);
+ return cnxk_pktmbuf_detach(m, aura);
m->next = NULL;
m->nb_segs = 1;
return 0;
} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
if (!RTE_MBUF_DIRECT(m))
- return cnxk_pktmbuf_detach(m);
+ return cnxk_pktmbuf_detach(m, aura);
rte_mbuf_refcnt_set(m, 1);
m->next = NULL;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.201540659 +0800
+++ 0036-net-cnxk-fix-indirect-mbuf-handling-in-Tx.patch 2024-04-13 20:43:04.947753997 +0800
@@ -1 +1 @@
-From 3232c95d2c361bdf5509cb9e9d0b9820398c1335 Mon Sep 17 00:00:00 2001
+From 0e5159a2235eccd26418d422868a9301d0aaf741 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3232c95d2c361bdf5509cb9e9d0b9820398c1335 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 78d1dca3c1..05d6d3b53f 100644
+index 4a4e97287c..29b7f2ba5e 100644
@@ -46 +48 @@
-index 2143df1a7e..492a07a2f7 100644
+index aeffc4ac92..9f33d0192e 100644
@@ -58 +60 @@
-index fcd19be77e..9647f4259e 100644
+index 467f0ccc65..025eff2913 100644
@@ -61 +63 @@
-@@ -735,8 +735,9 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+@@ -786,8 +786,9 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
@@ -72 +74 @@
-@@ -745,7 +746,10 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
+@@ -796,7 +797,10 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
@@ -84 +86 @@
-@@ -755,10 +759,155 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
+@@ -806,10 +810,155 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
@@ -241 +243 @@
-@@ -838,6 +987,9 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
+@@ -889,6 +1038,9 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
@@ -251 +253 @@
-@@ -998,19 +1150,30 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
+@@ -1049,19 +1201,30 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
@@ -284 +286 @@
-@@ -1084,6 +1247,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1135,6 +1298,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -292 +294 @@
-@@ -1112,21 +1276,26 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1163,21 +1327,26 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -322 +324 @@
-@@ -1156,9 +1325,12 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1207,9 +1376,12 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -336 +338 @@
-@@ -1208,13 +1380,14 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1259,13 +1431,14 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -353 +355 @@
-@@ -1946,13 +2119,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
+@@ -1997,13 +2170,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
@@ -367 +369 @@
-@@ -2106,7 +2276,7 @@ again:
+@@ -2153,7 +2323,7 @@ again:
@@ -376 +378 @@
-@@ -2812,73 +2982,8 @@ again:
+@@ -2859,73 +3029,8 @@ again:
@@ -453 +455 @@
-index 67f21a9c7f..ea92b1dcb6 100644
+index bae4dda5e2..b92b978a27 100644
@@ -953 +955 @@
-index 56cfcb7fc6..119bb1836a 100644
+index c1f99a2616..67f40b8e25 100644
@@ -956 +958 @@
-@@ -92,7 +92,7 @@ struct cnxk_ethdev_inj_cfg {
+@@ -84,7 +84,7 @@ struct cnxk_timesync_info {
@@ -965 +967 @@
-@@ -102,6 +102,8 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
+@@ -94,6 +94,8 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
@@ -974 +976 @@
-@@ -134,18 +136,18 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
+@@ -126,18 +128,18 @@ cnxk_pktmbuf_detach(struct rte_mbuf *m)
^ permalink raw reply [flat|nested] 263+ messages in thread
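For readers following the patch above: the core change is that the debug cookie bookkeeping and the aura reported to hardware must both refer to the direct mbuf backing the buffer, not an indirect wrapper. A standalone sketch of that decision logic, using a hypothetical simplified mbuf (not the real rte_mbuf layout, and simplifying the cnxk_pktmbuf_detach() semantics):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal mbuf stand-in: only the fields the pre-free decision
 * needs. Hypothetical simplification, not the real rte_mbuf. */
struct mbuf {
	uint16_t refcnt;
	int direct;          /* 1 = direct, 0 = indirect */
	struct mbuf *parent; /* backing direct mbuf, if indirect */
	uint64_t pool_aura;  /* aura of the mbuf's own pool */
};

/* The debug cookie check must target the buffer's owning mbuf:
 * mirrors RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m). */
static struct mbuf *cookie_mbuf(struct mbuf *m)
{
	return m->direct ? m : m->parent;
}

/* Simplified model of cnxk_nix_prefree_seg(): returns the DF
 * ("don't free") bit and, on detach, reports the parent's aura so
 * the send descriptor frees into the right pool. */
static uint64_t prefree_seg(struct mbuf *m, uint64_t *aura)
{
	if (m->refcnt == 1) {
		if (!m->direct) {
			if (aura)
				*aura = m->parent->pool_aura;
			/* detach drops the parent's refcount */
			return --m->parent->refcnt != 0;
		}
		return 0;    /* DF=0: hardware frees the buffer */
	}
	if (--m->refcnt == 0) {
		m->refcnt = 1;
		return 0;
	}
	return 1;            /* DF=1: still referenced elsewhere */
}
```

Here DF=1 tells NIX not to free the buffer; on a detach the parent's aura is threaded back so the descriptor frees into the correct pool, which is what the aura rewrite in the senddesc words above implements.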
* patch 'net/cnxk: add cookies check for multi-segment offload' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (34 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: fix indirect mbuf handling in Tx' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix mbox struct attributes' " Xueming Li
` (87 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rahul Bhansali; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e5450b2bba3066b7b34e29a4edae3450b9b7d46c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e5450b2bba3066b7b34e29a4edae3450b9b7d46c Mon Sep 17 00:00:00 2001
From: Rahul Bhansali <rbhansali@marvell.com>
Date: Mon, 26 Feb 2024 19:05:33 +0530
Subject: [PATCH] net/cnxk: add cookies check for multi-segment offload
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 26a6bda9dfd153192c4cfb73b3367398caa3afaa ]
Fix missing cookie checks in the multi-segment offload case.
Fixes: 3626d5195d49 ("net/cnxk: add multi-segment Tx for CN10K")
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cn10k_tx.h | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 025eff2913..84d71d0818 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1867,6 +1867,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
len -= dlen;
sg_u = sg_u | ((uint64_t)dlen);
+ /* Mark mempool object as "put" since it is freed by NIX */
+ RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+
nb_segs = m->nb_segs - 1;
m_next = m->next;
m->next = NULL;
@@ -1892,6 +1895,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
slist++;
}
m->next = NULL;
+ /* Mark mempool object as "put" since it is freed by NIX */
+ RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+
m = m_next;
} while (nb_segs);
@@ -1915,8 +1921,11 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
union nix_send_hdr_w0_u sh;
union nix_send_sg_s sg;
- if (m->nb_segs == 1)
+ if (m->nb_segs == 1) {
+ /* Mark mempool object as "put" since it is freed by NIX */
+ RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
return;
+ }
sh.u = vgetq_lane_u64(cmd0[0], 0);
sg.u = vgetq_lane_u64(cmd1[0], 0);
@@ -1976,6 +1985,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
*data128 |= ((__uint128_t)7) << *shift;
*shift += 3;
+ /* Mark mempool object as "put" since it is freed by NIX */
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[0]->pool, (void **)&mbufs[0], 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[1]->pool, (void **)&mbufs[1], 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[2]->pool, (void **)&mbufs[2], 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[3]->pool, (void **)&mbufs[3], 1, 0);
return 1;
}
}
@@ -1994,6 +2008,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
vst1q_u64(lmt_addr + 10, cmd2[j + 1]);
vst1q_u64(lmt_addr + 12, cmd1[j + 1]);
vst1q_u64(lmt_addr + 14, cmd3[j + 1]);
+
+ /* Mark mempool object as "put" since it is freed by NIX */
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[j]->pool, (void **)&mbufs[j], 1, 0);
+ RTE_MEMPOOL_CHECK_COOKIES(mbufs[j + 1]->pool,
+ (void **)&mbufs[j + 1], 1, 0);
} else if (flags & NIX_TX_NEED_EXT_HDR) {
/* EXT header take 3 each, space for 2 segs.*/
cn10k_nix_prepare_mseg_vec(mbufs[j],
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.234572916 +0800
+++ 0037-net-cnxk-add-cookies-check-for-multi-segment-offload.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 26a6bda9dfd153192c4cfb73b3367398caa3afaa Mon Sep 17 00:00:00 2001
+From e5450b2bba3066b7b34e29a4edae3450b9b7d46c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 26a6bda9dfd153192c4cfb73b3367398caa3afaa ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -17 +19 @@
-index 9647f4259e..91b7f15c02 100644
+index 025eff2913..84d71d0818 100644
@@ -20 +22 @@
-@@ -1816,6 +1816,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
+@@ -1867,6 +1867,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
@@ -30 +32 @@
-@@ -1841,6 +1844,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
+@@ -1892,6 +1895,9 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
@@ -40 +42 @@
-@@ -1864,8 +1870,11 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
+@@ -1915,8 +1921,11 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
@@ -53 +55 @@
-@@ -1925,6 +1934,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
+@@ -1976,6 +1985,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
@@ -65 +67 @@
-@@ -1943,6 +1957,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
+@@ -1994,6 +2008,11 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
^ permalink raw reply [flat|nested] 263+ messages in thread
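The patch above only adds RTE_MEMPOOL_CHECK_COOKIES "put" marks in the multi-segment paths; the macro itself is a debug-build aid that tracks whether each object is currently held by software. A toy model of that accounting, with hypothetical counters rather than the real cookie layout:

```c
#include <assert.h>

/* Toy model of the mempool debug-cookie bookkeeping: each object
 * is either held by SW (get) or returned (put). Hypothetical
 * counters, not the real rte_mempool cookie words. */
struct pool_dbg {
	int outstanding;   /* objects currently handed out */
};

/* Model of RTE_MEMPOOL_CHECK_COOKIES(pool, objs, n, 0): the NIX
 * hardware will free these objects, so account the put in SW now.
 * Returns 0 on success, -1 on a double put. */
static int cookies_put(struct pool_dbg *p, int n)
{
	if (p->outstanding < n)
		return -1;     /* putting more than was taken */
	p->outstanding -= n;
	return 0;
}
```

Missing a put mark (the bug fixed above) would leave `outstanding` nonzero after transmit and trip the debug check on the next get of the same object.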
* patch 'common/cnxk: fix mbox struct attributes' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (35 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: add cookies check for multi-segment offload' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: fix mbuf fields in multi-segment Tx' " Xueming Li
` (86 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a6bd2f39c150d439a56c95069041dc5883659eff
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a6bd2f39c150d439a56c95069041dc5883659eff Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Mon, 26 Feb 2024 19:05:34 +0530
Subject: [PATCH] common/cnxk: fix mbox struct attributes
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c9dca1c5e352008bda8d0edeab8fbcf328437282 ]
The IO attribute is needed on mbox structs to avoid unaligned or
paired accesses caused by compiler optimization. Add it to the
structs where it is missing.
Fixes: 503b82de2cbf ("common/cnxk: add mbox request and response definitions")
Fixes: ddf955d3917e ("common/cnxk: support CPT second pass")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_mbox.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 05434aec5a..7eaff0a0eb 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -1420,12 +1420,12 @@ struct nix_cn10k_aq_enq_req {
struct nix_cn10k_aq_enq_rsp {
struct mbox_msghdr hdr;
union {
- struct nix_cn10k_rq_ctx_s rq;
- struct nix_cn10k_sq_ctx_s sq;
- struct nix_cq_ctx_s cq;
- struct nix_rsse_s rss;
- struct nix_rx_mce_s mce;
- struct nix_band_prof_s prof;
+ __io struct nix_cn10k_rq_ctx_s rq;
+ __io struct nix_cn10k_sq_ctx_s sq;
+ __io struct nix_cq_ctx_s cq;
+ __io struct nix_rsse_s rss;
+ __io struct nix_rx_mce_s mce;
+ __io struct nix_band_prof_s prof;
};
};
@@ -1661,11 +1661,11 @@ struct nix_rq_cpt_field_mask_cfg_req {
#define RQ_CTX_MASK_MAX 6
union {
uint64_t __io rq_ctx_word_set[RQ_CTX_MASK_MAX];
- struct nix_cn10k_rq_ctx_s rq_set;
+ __io struct nix_cn10k_rq_ctx_s rq_set;
};
union {
uint64_t __io rq_ctx_word_mask[RQ_CTX_MASK_MAX];
- struct nix_cn10k_rq_ctx_s rq_mask;
+ __io struct nix_cn10k_rq_ctx_s rq_mask;
};
struct nix_lf_rx_ipec_cfg1_req {
uint32_t __io spb_cpt_aura;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.258867084 +0800
+++ 0038-common-cnxk-fix-mbox-struct-attributes.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From c9dca1c5e352008bda8d0edeab8fbcf328437282 Mon Sep 17 00:00:00 2001
+From a6bd2f39c150d439a56c95069041dc5883659eff Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c9dca1c5e352008bda8d0edeab8fbcf328437282 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 4b4f48e372..d8a8494ac4 100644
+index 05434aec5a..7eaff0a0eb 100644
@@ -23 +25 @@
-@@ -1427,12 +1427,12 @@ struct nix_cn10k_aq_enq_req {
+@@ -1420,12 +1420,12 @@ struct nix_cn10k_aq_enq_req {
@@ -42 +44 @@
-@@ -1668,11 +1668,11 @@ struct nix_rq_cpt_field_mask_cfg_req {
+@@ -1661,11 +1661,11 @@ struct nix_rq_cpt_field_mask_cfg_req {
^ permalink raw reply [flat|nested] 263+ messages in thread
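For context on the fix above: `__io` in the ROC headers is, by assumption here, a volatile qualifier; marking the mbox context structs with it keeps the compiler from merging, widening or eliding accesses to mailbox memory. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Assumption: mirrors the ROC platform definition of __io. */
#define __io volatile

struct mbox_rsp {
	uint64_t __io word0;  /* each access stays one 64-bit op */
	uint64_t __io word1;
};

/* Reads a response word through the volatile-qualified field, so
 * the load is issued exactly as written rather than combined with
 * the neighbouring word by the optimizer. */
static uint64_t read_word0(struct mbox_rsp *rsp)
{
	return rsp->word0;   /* single 64-bit volatile load */
}
```

Without the qualifier, nothing stops the compiler from fusing `word0`/`word1` into one wider or reordered access, which is the unaligned/paired-access hazard the commit message describes.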
* patch 'net/cnxk: fix mbuf fields in multi-segment Tx' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (36 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix mbox struct attributes' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix link config for SDP' " Xueming Li
` (85 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rahul Bhansali; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6f05d2d4615a20eac1313912aedf3dfcf4f8f98c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6f05d2d4615a20eac1313912aedf3dfcf4f8f98c Mon Sep 17 00:00:00 2001
From: Rahul Bhansali <rbhansali@marvell.com>
Date: Mon, 26 Feb 2024 19:05:36 +0530
Subject: [PATCH] net/cnxk: fix mbuf fields in multi-segment Tx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8ed5ca4dda858c991b27ad6ce5a5525e26a960c0 ]
Currently, in debug mode, when a buffer is allocated in SW,
nb_segs can hold an invalid value since the mbuf did not come from
the driver Rx path. Hence reset the mbuf next and nb_segs fields in the multi-segment Tx path.
Fixes: 3626d5195d49 ("net/cnxk: add multi-segment Tx for CN10K")
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cn10k_tx.h | 2 ++
drivers/net/cnxk/cn9k_tx.h | 20 ++++++++++++++++++++
2 files changed, 22 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 84d71d0818..cc480d24e8 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1328,6 +1328,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
nb_segs = m->nb_segs - 1;
m_next = m->next;
m->next = NULL;
+ m->nb_segs = 1;
slist = &cmd[3 + off + 1];
cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
@@ -1873,6 +1874,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
nb_segs = m->nb_segs - 1;
m_next = m->next;
m->next = NULL;
+ m->nb_segs = 1;
m = m_next;
/* Fill mbuf segments */
do {
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 3596651cc2..94acbe64fa 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -647,6 +647,10 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
rte_io_wmb();
#else
RTE_SET_USED(cookie);
+#endif
+#ifdef RTE_ENABLE_ASSERT
+ m->next = NULL;
+ m->nb_segs = 1;
#endif
m = m_next;
if (!m)
@@ -683,6 +687,9 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
sg_u = sg->u;
slist++;
}
+#ifdef RTE_ENABLE_ASSERT
+ m->next = NULL;
+#endif
m = m_next;
} while (nb_segs);
@@ -696,6 +703,9 @@ done:
segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
send_hdr->w0.sizem1 = segdw - 1;
+#ifdef RTE_ENABLE_ASSERT
+ rte_io_wmb();
+#endif
return segdw;
}
@@ -912,6 +922,10 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
RTE_SET_USED(cookie);
#endif
+#ifdef RTE_ENABLE_ASSERT
+ m->next = NULL;
+ m->nb_segs = 1;
+#endif
m = m_next;
/* Fill mbuf segments */
do {
@@ -942,6 +956,9 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
sg_u = sg->u;
slist++;
}
+#ifdef RTE_ENABLE_ASSERT
+ m->next = NULL;
+#endif
m = m_next;
} while (nb_segs);
@@ -957,6 +974,9 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
!!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
send_hdr->w0.sizem1 = segdw - 1;
+#ifdef RTE_ENABLE_ASSERT
+ rte_io_wmb();
+#endif
return segdw;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.282742553 +0800
+++ 0039-net-cnxk-fix-mbuf-fields-in-multi-segment-Tx.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 8ed5ca4dda858c991b27ad6ce5a5525e26a960c0 Mon Sep 17 00:00:00 2001
+From 6f05d2d4615a20eac1313912aedf3dfcf4f8f98c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8ed5ca4dda858c991b27ad6ce5a5525e26a960c0 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 91b7f15c02..266c899a05 100644
+index 84d71d0818..cc480d24e8 100644
@@ -23 +25 @@
-@@ -1277,6 +1277,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1328,6 +1328,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -31 +33 @@
-@@ -1822,6 +1823,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
+@@ -1873,6 +1874,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
^ permalink raw reply [flat|nested] 263+ messages in thread
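The invariant the patch above restores can be modelled standalone: after a multi-segment transmit walk, each segment's `next`/`nb_segs` must be reset so a later software free (as happens in debug builds) sees a sane single-segment mbuf. A sketch with a hypothetical minimal mbuf:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Tiny mbuf stand-in with just the chaining fields. */
struct mbuf {
	struct mbuf *next;
	uint16_t nb_segs;
};

/* Model of the multi-seg Tx walk: after handing each segment to
 * hardware, reset next/nb_segs so any later SW free sees a valid
 * single-segment mbuf -- the invariant the RTE_ENABLE_ASSERT
 * hunks above restore. Returns the number of segments walked. */
static int txwalk_and_reset(struct mbuf *head)
{
	struct mbuf *m = head, *m_next;
	int segs = 0;

	while (m) {
		m_next = m->next;
		m->next = NULL;
		m->nb_segs = 1;
		segs++;
		m = m_next;
	}
	return segs;
}
```

Skipping the reset leaves a freed head mbuf claiming `nb_segs > 1` with a dangling `next`, which is exactly what debug-mode allocation checks trip over.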
* patch 'common/cnxk: fix link config for SDP' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (37 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: fix mbuf fields in multi-segment Tx' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: remove dead code' " Xueming Li
` (84 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Harman Kalra; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9cb9b9c8a0508b9e9b3074611636760c6cd24cae
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9cb9b9c8a0508b9e9b3074611636760c6cd24cae Mon Sep 17 00:00:00 2001
From: Harman Kalra <hkalra@marvell.com>
Date: Wed, 28 Feb 2024 00:25:19 +0530
Subject: [PATCH] common/cnxk: fix link config for SDP
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 48054ca3842b0fc468e3e7931143dcb154b45921 ]
Link configuration registers are invalid for SDP ports and should
not be accessed. However, on Txq release, the SQ flush path calls
the backpressure disable API, which configures these link registers.
Fixes: 58debb813a8d ("common/cnxk: enable TM to listen on Rx pause frames")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/common/cnxk/roc_nix_tm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index ece88b5e99..9e5e614b3b 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -328,6 +328,9 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
uint8_t k = 0;
int rc = 0, i;
+ if (roc_nix_is_sdp(roc_nix))
+ return 0;
+
sq_s = nix->sqs[sq];
if (!sq_s)
return -ENOENT;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.309170919 +0800
+++ 0040-common-cnxk-fix-link-config-for-SDP.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 48054ca3842b0fc468e3e7931143dcb154b45921 Mon Sep 17 00:00:00 2001
+From 9cb9b9c8a0508b9e9b3074611636760c6cd24cae Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 48054ca3842b0fc468e3e7931143dcb154b45921 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 6a61e448a1..4e6a28f827 100644
+index ece88b5e99..9e5e614b3b 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
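The fix above is an early-return guard: bail out before any register access when the port type has no valid link registers. A minimal generic sketch of that pattern follows; all names here (`fake_nix`, `bp_config_set`, `is_sdp`) are illustrative stand-ins, not the cnxk driver's actual API.

```c
#include <stdbool.h>

/* Illustrative stand-in for the device: 'is_sdp' plays the role of
 * roc_nix_is_sdp(), 'bp_reg' the role of the link back-pressure register. */
struct fake_nix {
	bool is_sdp;
	int bp_reg;
};

static int
bp_config_set(struct fake_nix *nix, int value)
{
	/* Early return: SDP-style ports have no valid link registers,
	 * so succeed without touching them (mirrors the patch's guard). */
	if (nix->is_sdp)
		return 0;

	nix->bp_reg = value;	/* only reached for non-SDP ports */
	return 0;
}
```

The guard sits at the top of the function so every later register write is unreachable for the excluded port type, which is simpler and safer than wrapping each individual access.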
* patch 'common/cnxk: remove dead code' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (38 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix link config for SDP' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'common/cnxk: fix possible out-of-bounds access' " Xueming Li
` (83 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Satheesh Paul; +Cc: Nithin Dabilpuram, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9172348240ce8568603366561e08f6d09b0f7398
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9172348240ce8568603366561e08f6d09b0f7398 Mon Sep 17 00:00:00 2001
From: Satheesh Paul <psatheesh@marvell.com>
Date: Fri, 1 Mar 2024 09:05:33 +0530
Subject: [PATCH] common/cnxk: remove dead code
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1ded7ef41ac888eb5e804d3337be687e5ebd324d ]
Removed dead code reported by Coverity.
Coverity issue: 380992
Fixes: da1ec39060b2 ("common/cnxk: delay inline device RQ enable to dev start")
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Reviewed-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index a7bae8a51c..bc9cc2f429 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -620,8 +620,7 @@ roc_nix_reassembly_configure(uint32_t max_wait_time, uint16_t max_frags)
return -EFAULT;
PLT_SET_USED(max_frags);
- if (idev == NULL)
- return -ENOTSUP;
+
roc_cpt = idev->cpt;
if (!roc_cpt) {
plt_err("Cannot support inline inbound, cryptodev not probed");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.332484788 +0800
+++ 0041-common-cnxk-remove-dead-code.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 1ded7ef41ac888eb5e804d3337be687e5ebd324d Mon Sep 17 00:00:00 2001
+From 9172348240ce8568603366561e08f6d09b0f7398 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1ded7ef41ac888eb5e804d3337be687e5ebd324d ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index a205c658e9..7dbeae5017 100644
+index a7bae8a51c..bc9cc2f429 100644
@@ -22 +24 @@
-@@ -677,8 +677,7 @@ roc_nix_reassembly_configure(uint32_t max_wait_time, uint16_t max_frags)
+@@ -620,8 +620,7 @@ roc_nix_reassembly_configure(uint32_t max_wait_time, uint16_t max_frags)
^ permalink raw reply [flat|nested] 263+ messages in thread
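The removed `if (idev == NULL)` check is dead because an earlier path in the function already returns on that condition. A simplified sketch of the pattern Coverity flags follows; `ctx` and `use_ctx` are made-up names, not the driver's functions.

```c
#include <stddef.h>
#include <errno.h>

struct ctx {
	int ready;
};

static int
use_ctx(struct ctx *c)
{
	/* First check: the only one that can ever fire. */
	if (c == NULL)
		return -EFAULT;

	/* A second 'if (c == NULL) return -ENOTSUP;' here would be dead
	 * code: control cannot reach this point with c == NULL, so the
	 * branch is unreachable and should be removed. */
	return c->ready ? 0 : -ENOTSUP;
}
```

Static analyzers report such branches because an unreachable check suggests either redundant code or, worse, a check placed after the dereference it was meant to protect.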
* patch 'common/cnxk: fix possible out-of-bounds access' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (39 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: remove dead code' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/cnxk: improve Tx performance for SW mbuf free' " Xueming Li
` (82 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Satheesh Paul; +Cc: Harman Kalra, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=37256aa1bf8f366524d76e4ed56205c49e9b4892
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 37256aa1bf8f366524d76e4ed56205c49e9b4892 Mon Sep 17 00:00:00 2001
From: Satheesh Paul <psatheesh@marvell.com>
Date: Fri, 1 Mar 2024 09:05:34 +0530
Subject: [PATCH] common/cnxk: fix possible out-of-bounds access
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9a92937cf0c836b7f2b5e303523279ddf9473a35 ]
The subtraction expression in mbox_memcpy() can wrap around, causing
an out-of-bounds access. Added a check on 'size' to fix this.
Coverity issue: 384431, 384439
Fixes: 585bb3e538f9 ("common/cnxk: add VF support to base device class")
Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
Reviewed-by: Harman Kalra <hkalra@marvell.com>
---
drivers/common/cnxk/roc_dev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 084343c3b4..14aff233d5 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -502,6 +502,8 @@ pf_vf_mbox_send_up_msg(struct dev *dev, void *rec_msg)
size_t size;
size = PLT_ALIGN(mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
+ if (size < sizeof(struct mbox_msghdr))
+ return;
/* Send UP message to all VF's */
for (vf = 0; vf < vf_mbox->ndevs; vf++) {
/* VF active */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.358261355 +0800
+++ 0042-common-cnxk-fix-possible-out-of-bounds-access.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 9a92937cf0c836b7f2b5e303523279ddf9473a35 Mon Sep 17 00:00:00 2001
+From 37256aa1bf8f366524d76e4ed56205c49e9b4892 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9a92937cf0c836b7f2b5e303523279ddf9473a35 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
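The bug class here is unsigned wraparound: with an unsigned `size`, `size - sizeof(header)` wraps to a huge value instead of going negative, and a later copy of that many bytes runs out of bounds. A hedged sketch of the fix follows; `msg_hdr` and `payload_len_ok` are illustrative names, not the driver's.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a fixed-size message header. */
struct msg_hdr {
	uint16_t id;
	uint16_t len;
};

static bool
payload_len_ok(size_t size, size_t *payload_len)
{
	/* Guard first: reject sizes smaller than the fixed header,
	 * as the patch does with 'if (size < sizeof(struct mbox_msghdr))'. */
	if (size < sizeof(struct msg_hdr))
		return false;

	/* Safe now: the subtraction cannot wrap. */
	*payload_len = size - sizeof(struct msg_hdr);
	return true;
}
```

The lower-bound check must come before any subtraction; checking the subtracted result afterwards is useless, since an unsigned result is always "non-negative".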
* patch 'net/cnxk: improve Tx performance for SW mbuf free' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (40 preceding siblings ...)
2024-04-13 12:48 ` patch 'common/cnxk: fix possible out-of-bounds access' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'doc: fix aging poll frequency option in cnxk guide' " Xueming Li
` (81 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Rahul Bhansali; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=630dbc8a928ba12e93c534df9d7dfdd6ad4af371
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 630dbc8a928ba12e93c534df9d7dfdd6ad4af371 Mon Sep 17 00:00:00 2001
From: Rahul Bhansali <rbhansali@marvell.com>
Date: Fri, 1 Mar 2024 08:46:45 +0530
Subject: [PATCH] net/cnxk: improve Tx performance for SW mbuf free
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f3d7cf8a4c7eedbf2bdfc19370d49bd2557717e6 ]
Performance is improved for the Tx fastpath flag MBUF_NOFF when
tx_compl_ena is false and the mbuf has an external buffer.
In such a case, instead of freeing each external mbuf individually
before the LMTST, a chain of external mbufs is built and freed all
at once after the LMTST.
This not only improves performance but also fixes SQ corruption.
CN10k performance improvement is ~14%.
CN9k performance improvement is ~20%.
Fixes: 51a636528515 ("net/cnxk: fix crash during Tx completion")
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/event/cnxk/cn10k_tx_worker.h | 8 ++-
drivers/event/cnxk/cn9k_worker.h | 9 ++-
drivers/net/cnxk/cn10k_tx.h | 97 +++++++++++++++++++---------
drivers/net/cnxk/cn9k_tx.h | 88 ++++++++++++++++---------
4 files changed, 135 insertions(+), 67 deletions(-)
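The core of the change is an intrusive singly linked chain: on the hot path, each external mbuf is pushed onto a local list through its `next` pointer (an O(1) operation), and the whole chain is walked and freed once, after the LMTST/barrier. A minimal sketch of that deferral follows, with made-up types; `chain_free` mirrors the shape of the added `cn10k_nix_free_extmbuf()`, and `freed` stands in for the real `rte_pktmbuf_free_seg()` call.

```c
#include <stddef.h>

/* Stand-in for rte_mbuf: only the intrusive 'next' link matters here. */
struct buf {
	struct buf *next;
	int freed;	/* stands in for rte_pktmbuf_free_seg() */
};

/* Hot path: defer the free by pushing onto the chain head. */
static void
chain_push(struct buf **chain, struct buf *b)
{
	b->next = *chain;
	*chain = b;
}

/* Cold path, after the barrier: walk and release the whole chain,
 * mirroring the shape of cn10k_nix_free_extmbuf(). */
static void
chain_free(struct buf *b)
{
	struct buf *next;

	while (b != NULL) {
		next = b->next;
		b->freed = 1;	/* real code: rte_pktmbuf_free_seg(b) */
		b = next;
	}
}
```

Batching the frees off the hot path is where the cited ~14-20% gain plausibly comes from; the correctness aspect is that nothing is freed until after the LMTST has been issued.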
diff --git a/drivers/event/cnxk/cn10k_tx_worker.h b/drivers/event/cnxk/cn10k_tx_worker.h
index 53e0dde20c..256237b895 100644
--- a/drivers/event/cnxk/cn10k_tx_worker.h
+++ b/drivers/event/cnxk/cn10k_tx_worker.h
@@ -70,6 +70,7 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
const uint64_t *txq_data, const uint32_t flags)
{
uint8_t lnum = 0, loff = 0, shft = 0;
+ struct rte_mbuf *extm = NULL;
struct cn10k_eth_txq *txq;
uintptr_t laddr;
uint16_t segdw;
@@ -90,7 +91,7 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
if (flags & NIX_TX_OFFLOAD_TSO_F)
cn10k_nix_xmit_prepare_tso(m, flags);
- cn10k_nix_xmit_prepare(txq, m, cmd, flags, txq->lso_tun_fmt, &sec,
+ cn10k_nix_xmit_prepare(txq, m, &extm, cmd, flags, txq->lso_tun_fmt, &sec,
txq->mark_flag, txq->mark_fmt);
laddr = lmt_addr;
@@ -105,7 +106,7 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
if (flags & NIX_TX_MULTI_SEG_F)
- segdw = cn10k_nix_prepare_mseg(txq, m, (uint64_t *)laddr, flags);
+ segdw = cn10k_nix_prepare_mseg(txq, m, &extm, (uint64_t *)laddr, flags);
else
segdw = cn10k_nix_tx_ext_subs(flags) + 2;
@@ -127,6 +128,9 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
/* Memory barrier to make sure lmtst store completes */
rte_io_wmb();
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena)
+ cn10k_nix_free_extmbuf(extm);
+
return 1;
}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 0451157812..107265d54b 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -746,7 +746,7 @@ static __rte_always_inline uint16_t
cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
uint64_t *txq_data, const uint32_t flags)
{
- struct rte_mbuf *m = ev->mbuf;
+ struct rte_mbuf *m = ev->mbuf, *extm = NULL;
struct cn9k_eth_txq *txq;
/* Perform header writes before barrier for TSO */
@@ -767,7 +767,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
if (cn9k_sso_sq_depth(txq) <= 0)
return 0;
cn9k_nix_tx_skeleton(txq, cmd, flags, 0);
- cn9k_nix_xmit_prepare(txq, m, cmd, flags, txq->lso_tun_fmt, txq->mark_flag,
+ cn9k_nix_xmit_prepare(txq, m, &extm, cmd, flags, txq->lso_tun_fmt, txq->mark_flag,
txq->mark_fmt);
if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
@@ -789,7 +789,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
}
if (flags & NIX_TX_MULTI_SEG_F) {
- const uint16_t segdw = cn9k_nix_prepare_mseg(txq, m, cmd, flags);
+ const uint16_t segdw = cn9k_nix_prepare_mseg(txq, m, &extm, cmd, flags);
cn9k_nix_xmit_prepare_tstamp(txq, cmd, m->ol_flags, segdw,
flags);
if (!CNXK_TT_FROM_EVENT(ev->event)) {
@@ -819,6 +819,9 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
}
done:
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena)
+ cn9k_nix_free_extmbuf(extm);
+
return 1;
}
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index cc480d24e8..5dff578ba4 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -784,8 +784,19 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
}
#endif
+static inline void
+cn10k_nix_free_extmbuf(struct rte_mbuf *m)
+{
+ struct rte_mbuf *m_next;
+ while (m != NULL) {
+ m_next = m->next;
+ rte_pktmbuf_free_seg(m);
+ m = m_next;
+ }
+}
+
static __rte_always_inline uint64_t
-cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
+cn10k_nix_prefree_seg(struct rte_mbuf *m, struct rte_mbuf **extm, struct cn10k_eth_txq *txq,
struct nix_send_hdr_s *send_hdr, uint64_t *aura)
{
struct rte_mbuf *prev = NULL;
@@ -793,7 +804,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
if (RTE_MBUF_HAS_EXTBUF(m)) {
if (unlikely(txq->tx_compl.ena == 0)) {
- rte_pktmbuf_free_seg(m);
+ m->next = *extm;
+ *extm = m;
return 1;
}
if (send_hdr->w0.pnc) {
@@ -817,7 +829,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
#if defined(RTE_ARCH_ARM64)
/* Only called for first segments of single segmented mbufs */
static __rte_always_inline void
-cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct rte_mbuf **extm,
+ struct cn10k_eth_txq *txq,
uint64x2_t *senddesc01_w0, uint64x2_t *senddesc23_w0,
uint64x2_t *senddesc01_w1, uint64x2_t *senddesc23_w1)
{
@@ -841,7 +854,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc01_w1, 0);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m0);
+ m0->next = *extm;
+ *extm = m0;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -871,7 +885,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc01_w1, 1);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m1);
+ m1->next = *extm;
+ *extm = m1;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -901,7 +916,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc23_w1, 0);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m2);
+ m2->next = *extm;
+ *extm = m2;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -931,7 +947,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc23_w1, 1);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m3);
+ m3->next = *extm;
+ *extm = m3;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -1013,9 +1030,9 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
static __rte_always_inline void
cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt, bool *sec, uint8_t mark_flag,
- uint64_t mark_fmt)
+ struct rte_mbuf *m, struct rte_mbuf **extm, uint64_t *cmd,
+ const uint16_t flags, const uint64_t lso_tun_fmt, bool *sec,
+ uint8_t mark_flag, uint64_t mark_fmt)
{
uint8_t mark_off = 0, mark_vlan = 0, markptr = 0;
struct nix_send_ext_s *send_hdr_ext;
@@ -1215,7 +1232,7 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
* DF bit = 0 otherwise
*/
aura = send_hdr->w0.aura;
- send_hdr->w0.df = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ send_hdr->w0.df = cn10k_nix_prefree_seg(m, extm, txq, send_hdr, &aura);
send_hdr->w0.aura = aura;
}
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -1291,8 +1308,8 @@ cn10k_nix_xmit_prepare_tstamp(struct cn10k_eth_txq *txq, uintptr_t lmt_addr,
}
static __rte_always_inline uint16_t
-cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq, struct rte_mbuf *m, struct rte_mbuf **extm,
+ uint64_t *cmd, const uint16_t flags)
{
uint64_t prefree = 0, aura0, aura, nb_segs, segdw;
struct nix_send_hdr_s *send_hdr;
@@ -1335,7 +1352,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
aura = send_hdr->w0.aura;
- prefree = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ prefree = cn10k_nix_prefree_seg(m, extm, txq, send_hdr, &aura);
send_hdr->w0.aura = aura;
l_sg.i1 = prefree;
}
@@ -1382,7 +1399,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
- prefree = cn10k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ prefree = cn10k_nix_prefree_seg(m, extm, txq, send_hdr, &aura);
is_sg2 = aura != aura0 && !prefree;
}
@@ -1476,6 +1493,7 @@ cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts,
uint8_t lnum, c_lnum, c_shft, c_loff;
uintptr_t pa, lbase = txq->lmt_base;
uint16_t lmt_id, burst, left, i;
+ struct rte_mbuf *extm = NULL;
uintptr_t c_lbase = lbase;
uint64_t lso_tun_fmt = 0;
uint64_t mark_fmt = 0;
@@ -1530,7 +1548,7 @@ again:
if (flags & NIX_TX_OFFLOAD_TSO_F)
cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
- cn10k_nix_xmit_prepare(txq, tx_pkts[i], cmd, flags, lso_tun_fmt,
+ cn10k_nix_xmit_prepare(txq, tx_pkts[i], &extm, cmd, flags, lso_tun_fmt,
&sec, mark_flag, mark_fmt);
laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
@@ -1605,6 +1623,11 @@ again:
}
rte_io_wmb();
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena) {
+ cn10k_nix_free_extmbuf(extm);
+ extm = NULL;
+ }
+
if (left)
goto again;
@@ -1620,6 +1643,7 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws,
uintptr_t pa0, pa1, lbase = txq->lmt_base;
const rte_iova_t io_addr = txq->io_addr;
uint16_t segdw, lmt_id, burst, left, i;
+ struct rte_mbuf *extm = NULL;
uint8_t lnum, c_lnum, c_loff;
uintptr_t c_lbase = lbase;
uint64_t lso_tun_fmt = 0;
@@ -1681,7 +1705,7 @@ again:
if (flags & NIX_TX_OFFLOAD_TSO_F)
cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
- cn10k_nix_xmit_prepare(txq, tx_pkts[i], cmd, flags, lso_tun_fmt,
+ cn10k_nix_xmit_prepare(txq, tx_pkts[i], &extm, cmd, flags, lso_tun_fmt,
&sec, mark_flag, mark_fmt);
laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
@@ -1695,7 +1719,7 @@ again:
/* Move NIX desc to LMT/NIXTX area */
cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
/* Store sg list directly on lmt line */
- segdw = cn10k_nix_prepare_mseg(txq, tx_pkts[i], (uint64_t *)laddr,
+ segdw = cn10k_nix_prepare_mseg(txq, tx_pkts[i], &extm, (uint64_t *)laddr,
flags);
cn10k_nix_xmit_prepare_tstamp(txq, laddr, tx_pkts[i]->ol_flags,
segdw, flags);
@@ -1768,6 +1792,11 @@ again:
}
rte_io_wmb();
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena) {
+ cn10k_nix_free_extmbuf(extm);
+ extm = NULL;
+ }
+
if (left)
goto again;
@@ -1818,7 +1847,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
static __rte_always_inline uint16_t
cn10k_nix_prepare_mseg_vec_noff(struct cn10k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd,
+ struct rte_mbuf *m, struct rte_mbuf **extm, uint64_t *cmd,
uint64x2_t *cmd0, uint64x2_t *cmd1,
uint64x2_t *cmd2, uint64x2_t *cmd3,
const uint32_t flags)
@@ -1833,7 +1862,7 @@ cn10k_nix_prepare_mseg_vec_noff(struct cn10k_eth_txq *txq,
vst1q_u64(cmd + 2, *cmd1); /* sg */
}
- segdw = cn10k_nix_prepare_mseg(txq, m, cmd, flags);
+ segdw = cn10k_nix_prepare_mseg(txq, m, extm, cmd, flags);
if (flags & NIX_TX_OFFLOAD_TSTAMP_F)
vst1q_u64(cmd + segdw * 2 - 2, *cmd3);
@@ -1943,7 +1972,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
static __rte_always_inline uint8_t
cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
- struct rte_mbuf **mbufs, uint64x2_t *cmd0,
+ struct rte_mbuf **mbufs, struct rte_mbuf **extm, uint64x2_t *cmd0,
uint64x2_t *cmd1, uint64x2_t *cmd2,
uint64x2_t *cmd3, uint8_t *segdw,
uint64_t *lmt_addr, __uint128_t *data128,
@@ -1961,7 +1990,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
lmt_addr += 16;
off = 0;
}
- off += cn10k_nix_prepare_mseg_vec_noff(txq, mbufs[j],
+ off += cn10k_nix_prepare_mseg_vec_noff(txq, mbufs[j], extm,
lmt_addr + off * 2, &cmd0[j], &cmd1[j],
&cmd2[j], &cmd3[j], flags);
}
@@ -2114,14 +2143,14 @@ cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
static __rte_always_inline void
cn10k_nix_xmit_store(struct cn10k_eth_txq *txq,
- struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
+ struct rte_mbuf *mbuf, struct rte_mbuf **extm, uint8_t segdw, uintptr_t laddr,
uint64x2_t cmd0, uint64x2_t cmd1, uint64x2_t cmd2,
uint64x2_t cmd3, const uint16_t flags)
{
uint8_t off;
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- cn10k_nix_prepare_mseg_vec_noff(txq, mbuf, LMT_OFF(laddr, 0, 0),
+ cn10k_nix_prepare_mseg_vec_noff(txq, mbuf, extm, LMT_OFF(laddr, 0, 0),
&cmd0, &cmd1, &cmd2, &cmd3,
flags);
return;
@@ -2205,6 +2234,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
__uint128_t data128;
uint64_t data[2];
} wd;
+ struct rte_mbuf *extm = NULL;
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && txq->tx_compl.ena)
handle_tx_completion_pkts(txq, flags & NIX_TX_VWQE_F);
@@ -3050,8 +3080,8 @@ again:
!(flags & NIX_TX_MULTI_SEG_F) &&
!(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
/* Set don't free bit if reference count > 1 */
- cn10k_nix_prefree_seg_vec(tx_pkts, txq, &senddesc01_w0, &senddesc23_w0,
- &senddesc01_w1, &senddesc23_w1);
+ cn10k_nix_prefree_seg_vec(tx_pkts, &extm, txq, &senddesc01_w0,
+ &senddesc23_w0, &senddesc01_w1, &senddesc23_w1);
} else if (!(flags & NIX_TX_MULTI_SEG_F) &&
!(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
/* Move mbufs to iova */
@@ -3123,7 +3153,7 @@ again:
&shift, &wd.data128, &next);
/* Store mbuf0 to LMTLINE/CPT NIXTX area */
- cn10k_nix_xmit_store(txq, tx_pkts[0], segdw[0], next,
+ cn10k_nix_xmit_store(txq, tx_pkts[0], &extm, segdw[0], next,
cmd0[0], cmd1[0], cmd2[0], cmd3[0],
flags);
@@ -3139,7 +3169,7 @@ again:
&shift, &wd.data128, &next);
/* Store mbuf1 to LMTLINE/CPT NIXTX area */
- cn10k_nix_xmit_store(txq, tx_pkts[1], segdw[1], next,
+ cn10k_nix_xmit_store(txq, tx_pkts[1], &extm, segdw[1], next,
cmd0[1], cmd1[1], cmd2[1], cmd3[1],
flags);
@@ -3155,7 +3185,7 @@ again:
&shift, &wd.data128, &next);
/* Store mbuf2 to LMTLINE/CPT NIXTX area */
- cn10k_nix_xmit_store(txq, tx_pkts[2], segdw[2], next,
+ cn10k_nix_xmit_store(txq, tx_pkts[2], &extm, segdw[2], next,
cmd0[2], cmd1[2], cmd2[2], cmd3[2],
flags);
@@ -3171,7 +3201,7 @@ again:
&shift, &wd.data128, &next);
/* Store mbuf3 to LMTLINE/CPT NIXTX area */
- cn10k_nix_xmit_store(txq, tx_pkts[3], segdw[3], next,
+ cn10k_nix_xmit_store(txq, tx_pkts[3], &extm, segdw[3], next,
cmd0[3], cmd1[3], cmd2[3], cmd3[3],
flags);
@@ -3179,7 +3209,7 @@ again:
uint8_t j;
segdw[4] = 8;
- j = cn10k_nix_prep_lmt_mseg_vector(txq, tx_pkts, cmd0, cmd1,
+ j = cn10k_nix_prep_lmt_mseg_vector(txq, tx_pkts, &extm, cmd0, cmd1,
cmd2, cmd3, segdw,
(uint64_t *)
LMT_OFF(laddr, lnum,
@@ -3329,6 +3359,11 @@ again:
}
rte_io_wmb();
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena) {
+ cn10k_nix_free_extmbuf(extm);
+ extm = NULL;
+ }
+
if (left)
goto again;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 94acbe64fa..018fae2eb7 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -82,16 +82,28 @@ cn9k_nix_tx_skeleton(struct cn9k_eth_txq *txq, uint64_t *cmd,
}
}
+static __rte_always_inline void
+cn9k_nix_free_extmbuf(struct rte_mbuf *m)
+{
+ struct rte_mbuf *m_next;
+ while (m != NULL) {
+ m_next = m->next;
+ rte_pktmbuf_free_seg(m);
+ m = m_next;
+ }
+}
+
static __rte_always_inline uint64_t
-cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq, struct nix_send_hdr_s *send_hdr,
- uint64_t *aura)
+cn9k_nix_prefree_seg(struct rte_mbuf *m, struct rte_mbuf **extm, struct cn9k_eth_txq *txq,
+ struct nix_send_hdr_s *send_hdr, uint64_t *aura)
{
struct rte_mbuf *prev;
uint32_t sqe_id;
if (RTE_MBUF_HAS_EXTBUF(m)) {
if (unlikely(txq->tx_compl.ena == 0)) {
- rte_pktmbuf_free_seg(m);
+ m->next = *extm;
+ *extm = m;
return 1;
}
if (send_hdr->w0.pnc) {
@@ -115,7 +127,7 @@ cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq, struct nix_se
#if defined(RTE_ARCH_ARM64)
/* Only called for first segments of single segmented mbufs */
static __rte_always_inline void
-cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
+cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct rte_mbuf **extm, struct cn9k_eth_txq *txq,
uint64x2_t *senddesc01_w0, uint64x2_t *senddesc23_w0,
uint64x2_t *senddesc01_w1, uint64x2_t *senddesc23_w1)
{
@@ -139,7 +151,8 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc01_w1, 0);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m0);
+ m0->next = *extm;
+ *extm = m0;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -169,7 +182,8 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc01_w1, 1);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m1);
+ m1->next = *extm;
+ *extm = m1;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -199,7 +213,8 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc23_w1, 0);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m2);
+ m2->next = *extm;
+ *extm = m2;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -229,7 +244,8 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
w1 = vgetq_lane_u64(*senddesc23_w1, 1);
w1 &= ~0xFFFF000000000000UL;
if (unlikely(!tx_compl_ena)) {
- rte_pktmbuf_free_seg(m3);
+ m3->next = *extm;
+ *extm = m3;
} else {
sqe_id = rte_atomic_fetch_add_explicit(&txq->tx_compl.sqe_id, 1,
rte_memory_order_relaxed);
@@ -310,10 +326,9 @@ cn9k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
}
static __rte_always_inline void
-cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt, uint8_t mark_flag,
- uint64_t mark_fmt)
+cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq, struct rte_mbuf *m, struct rte_mbuf **extm,
+ uint64_t *cmd, const uint16_t flags, const uint64_t lso_tun_fmt,
+ uint8_t mark_flag, uint64_t mark_fmt)
{
uint8_t mark_off = 0, mark_vlan = 0, markptr = 0;
struct nix_send_ext_s *send_hdr_ext;
@@ -509,7 +524,7 @@ cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq,
* DF bit = 0 otherwise
*/
aura = send_hdr->w0.aura;
- send_hdr->w0.df = cn9k_nix_prefree_seg(m, txq, send_hdr, &aura);
+ send_hdr->w0.df = cn9k_nix_prefree_seg(m, extm, txq, send_hdr, &aura);
send_hdr->w0.aura = aura;
/* Ensuring mbuf fields which got updated in
* cnxk_nix_prefree_seg are written before LMTST.
@@ -600,8 +615,8 @@ cn9k_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
}
static __rte_always_inline uint16_t
-cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq, struct rte_mbuf *m, struct rte_mbuf **extm,
+ uint64_t *cmd, const uint16_t flags)
{
struct nix_send_hdr_s *send_hdr;
uint64_t prefree = 0, aura;
@@ -634,7 +649,7 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
aura = send_hdr->w0.aura;
- prefree = (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << 55);
+ prefree = (cn9k_nix_prefree_seg(m, extm, txq, send_hdr, &aura) << 55);
send_hdr->w0.aura = aura;
sg_u |= prefree;
rte_io_wmb();
@@ -664,7 +679,7 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, NULL) << (i + 55));
+ sg_u |= (cn9k_nix_prefree_seg(m, extm, txq, send_hdr, NULL) << (i + 55));
/* Commit changes to mbuf */
rte_io_wmb();
}
@@ -748,6 +763,7 @@ cn9k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
const rte_iova_t io_addr = txq->io_addr;
uint64_t lso_tun_fmt = 0, mark_fmt = 0;
void *lmt_addr = txq->lmt_addr;
+ struct rte_mbuf *extm = NULL;
uint8_t mark_flag = 0;
uint16_t i;
@@ -778,13 +794,16 @@ cn9k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
rte_io_wmb();
for (i = 0; i < pkts; i++) {
- cn9k_nix_xmit_prepare(txq, tx_pkts[i], cmd, flags, lso_tun_fmt,
+ cn9k_nix_xmit_prepare(txq, tx_pkts[i], &extm, cmd, flags, lso_tun_fmt,
mark_flag, mark_fmt);
cn9k_nix_xmit_prepare_tstamp(txq, cmd, tx_pkts[i]->ol_flags, 4,
flags);
cn9k_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
}
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena)
+ cn9k_nix_free_extmbuf(extm);
+
/* Reduce the cached count */
txq->fc_cache_pkts -= pkts;
@@ -799,6 +818,7 @@ cn9k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
const rte_iova_t io_addr = txq->io_addr;
uint64_t lso_tun_fmt = 0, mark_fmt = 0;
void *lmt_addr = txq->lmt_addr;
+ struct rte_mbuf *extm = NULL;
uint8_t mark_flag = 0;
uint16_t segdw;
uint64_t i;
@@ -830,14 +850,17 @@ cn9k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
rte_io_wmb();
for (i = 0; i < pkts; i++) {
- cn9k_nix_xmit_prepare(txq, tx_pkts[i], cmd, flags, lso_tun_fmt,
+ cn9k_nix_xmit_prepare(txq, tx_pkts[i], &extm, cmd, flags, lso_tun_fmt,
mark_flag, mark_fmt);
- segdw = cn9k_nix_prepare_mseg(txq, tx_pkts[i], cmd, flags);
+ segdw = cn9k_nix_prepare_mseg(txq, tx_pkts[i], &extm, cmd, flags);
cn9k_nix_xmit_prepare_tstamp(txq, cmd, tx_pkts[i]->ol_flags,
segdw, flags);
cn9k_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
}
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena)
+ cn9k_nix_free_extmbuf(extm);
+
/* Reduce the cached count */
txq->fc_cache_pkts -= pkts;
@@ -885,7 +908,7 @@ cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
static __rte_always_inline uint8_t
cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd,
+ struct rte_mbuf *m, struct rte_mbuf **extm, uint64_t *cmd,
struct nix_send_hdr_s *send_hdr,
union nix_send_sg_s *sg, const uint32_t flags)
{
@@ -910,7 +933,7 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
aura = send_hdr->w0.aura;
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << 55);
+ sg_u |= (cn9k_nix_prefree_seg(m, extm, txq, send_hdr, &aura) << 55);
send_hdr->w0.aura = aura;
}
/* Mark mempool object as "put" since it is freed by NIX */
@@ -935,7 +958,7 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
/* Set invert df if buffer is not to be freed by H/W */
if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, &aura) << (i + 55));
+ sg_u |= (cn9k_nix_prefree_seg(m, extm, txq, send_hdr, &aura) << (i + 55));
/* Mark mempool object as "put" since it is freed by NIX
*/
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -981,9 +1004,8 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
}
static __rte_always_inline uint8_t
-cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq,
- struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
- uint64x2_t *cmd1, const uint32_t flags)
+cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq, struct rte_mbuf *m, struct rte_mbuf **extm,
+ uint64_t *cmd, uint64x2_t *cmd0, uint64x2_t *cmd1, const uint32_t flags)
{
struct nix_send_hdr_s send_hdr;
struct rte_mbuf *cookie;
@@ -998,7 +1020,7 @@ cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq,
send_hdr.w1.u = vgetq_lane_u64(cmd0[0], 1);
sg.u = vgetq_lane_u64(cmd1[0], 0);
aura = send_hdr.w0.aura;
- sg.u |= (cn9k_nix_prefree_seg(m, txq, &send_hdr, &aura) << 55);
+ sg.u |= (cn9k_nix_prefree_seg(m, extm, txq, &send_hdr, &aura) << 55);
send_hdr.w0.aura = aura;
cmd1[0] = vsetq_lane_u64(sg.u, cmd1[0], 0);
cmd0[0] = vsetq_lane_u64(send_hdr.w0.u, cmd0[0], 0);
@@ -1021,7 +1043,7 @@ cn9k_nix_prepare_mseg_vec(struct cn9k_eth_txq *txq,
send_hdr.w1.u = vgetq_lane_u64(cmd0[0], 1);
sg.u = vgetq_lane_u64(cmd1[0], 0);
- ret = cn9k_nix_prepare_mseg_vec_list(txq, m, cmd, &send_hdr, &sg, flags);
+ ret = cn9k_nix_prepare_mseg_vec_list(txq, m, extm, cmd, &send_hdr, &sg, flags);
cmd0[0] = vsetq_lane_u64(send_hdr.w0.u, cmd0[0], 0);
cmd0[0] = vsetq_lane_u64(send_hdr.w1.u, cmd0[0], 1);
@@ -1168,6 +1190,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
uint64_t *lmt_addr = txq->lmt_addr;
rte_iova_t io_addr = txq->io_addr;
uint64x2_t ltypes01, ltypes23;
+ struct rte_mbuf *extm = NULL;
uint64x2_t xtmp128, ytmp128;
uint64x2_t xmask01, xmask23;
uint64_t lmt_status, i;
@@ -1933,8 +1956,8 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
!(flags & NIX_TX_MULTI_SEG_F)) {
/* Set don't free bit if reference count > 1 */
- cn9k_nix_prefree_seg_vec(tx_pkts, txq, &senddesc01_w0, &senddesc23_w0,
- &senddesc01_w1, &senddesc23_w1);
+ cn9k_nix_prefree_seg_vec(tx_pkts, &extm, txq, &senddesc01_w0,
+ &senddesc23_w0, &senddesc01_w1, &senddesc23_w1);
/* Ensuring mbuf fields which got updated in
* cnxk_nix_prefree_seg are written before LMTST.
*/
@@ -1995,7 +2018,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
/* Build mseg list for each packet individually. */
for (j = 0; j < NIX_DESCS_PER_LOOP; j++)
segdw[j] = cn9k_nix_prepare_mseg_vec(txq,
- tx_pkts[j],
+ tx_pkts[j], &extm,
seg_list[j], &cmd0[j],
&cmd1[j], flags);
segdw[4] = 8;
@@ -2070,6 +2093,9 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
}
+ if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F && !txq->tx_compl.ena)
+ cn9k_nix_free_extmbuf(extm);
+
if (unlikely(pkts_left)) {
if (flags & NIX_TX_MULTI_SEG_F)
pkts += cn9k_nix_xmit_pkts_mseg(tx_queue, tx_pkts,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.384004321 +0800
+++ 0043-net-cnxk-improve-Tx-performance-for-SW-mbuf-free.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From f3d7cf8a4c7eedbf2bdfc19370d49bd2557717e6 Mon Sep 17 00:00:00 2001
+From 630dbc8a928ba12e93c534df9d7dfdd6ad4af371 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f3d7cf8a4c7eedbf2bdfc19370d49bd2557717e6 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -20,6 +22,5 @@
- doc/guides/rel_notes/release_24_03.rst | 1 +
- drivers/event/cnxk/cn10k_tx_worker.h | 8 ++-
- drivers/event/cnxk/cn9k_worker.h | 9 ++-
- drivers/net/cnxk/cn10k_tx.h | 97 ++++++++++++++++++--------
- drivers/net/cnxk/cn9k_tx.h | 88 +++++++++++++++--------
- 5 files changed, 136 insertions(+), 67 deletions(-)
+ drivers/event/cnxk/cn10k_tx_worker.h | 8 ++-
+ drivers/event/cnxk/cn9k_worker.h | 9 ++-
+ drivers/net/cnxk/cn10k_tx.h | 97 +++++++++++++++++++---------
+ drivers/net/cnxk/cn9k_tx.h | 88 ++++++++++++++++---------
+ 4 files changed, 135 insertions(+), 67 deletions(-)
@@ -27,12 +27,0 @@
-diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
-index 6f6cc06dbb..639de93b79 100644
---- a/doc/guides/rel_notes/release_24_03.rst
-+++ b/doc/guides/rel_notes/release_24_03.rst
-@@ -115,6 +115,7 @@ New Features
- * Added support for ``RTE_FLOW_ITEM_TYPE_PPPOES`` flow item.
- * Added support for ``RTE_FLOW_ACTION_TYPE_SAMPLE`` flow item.
- * Added support for Rx inject.
-+ * Optimized SW external mbuf free for better performance and avoid SQ corruption.
-
- * **Updated Marvell OCTEON EP driver.**
-
@@ -80 +69 @@
-index e8863e42fc..a8e998951c 100644
+index 0451157812..107265d54b 100644
@@ -83 +72 @@
-@@ -749,7 +749,7 @@ static __rte_always_inline uint16_t
+@@ -746,7 +746,7 @@ static __rte_always_inline uint16_t
@@ -92 +81 @@
-@@ -770,7 +770,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
+@@ -767,7 +767,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
@@ -101 +90 @@
-@@ -792,7 +792,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
+@@ -789,7 +789,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
@@ -110 +99 @@
-@@ -822,6 +822,9 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
+@@ -819,6 +819,9 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
@@ -121 +110 @@
-index 266c899a05..5c4b9e559e 100644
+index cc480d24e8..5dff578ba4 100644
@@ -124 +113 @@
-@@ -733,8 +733,19 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+@@ -784,8 +784,19 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
@@ -145 +134 @@
-@@ -742,7 +753,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
+@@ -793,7 +804,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
@@ -155 +144 @@
-@@ -766,7 +778,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
+@@ -817,7 +829,8 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
@@ -165 +154 @@
-@@ -790,7 +803,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+@@ -841,7 +854,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
@@ -175 +164 @@
-@@ -820,7 +834,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+@@ -871,7 +885,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
@@ -185 +174 @@
-@@ -850,7 +865,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+@@ -901,7 +916,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
@@ -195 +184 @@
-@@ -880,7 +896,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
+@@ -931,7 +947,8 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
@@ -205 +194 @@
-@@ -962,9 +979,9 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
+@@ -1013,9 +1030,9 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
@@ -218 +207 @@
-@@ -1164,7 +1181,7 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
+@@ -1215,7 +1232,7 @@ cn10k_nix_xmit_prepare(struct cn10k_eth_txq *txq,
@@ -227 +216 @@
-@@ -1240,8 +1257,8 @@ cn10k_nix_xmit_prepare_tstamp(struct cn10k_eth_txq *txq, uintptr_t lmt_addr,
+@@ -1291,8 +1308,8 @@ cn10k_nix_xmit_prepare_tstamp(struct cn10k_eth_txq *txq, uintptr_t lmt_addr,
@@ -238 +227 @@
-@@ -1284,7 +1301,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1335,7 +1352,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -247 +236 @@
-@@ -1331,7 +1348,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
+@@ -1382,7 +1399,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
@@ -256 +245 @@
-@@ -1425,6 +1442,7 @@ cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts,
+@@ -1476,6 +1493,7 @@ cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts,
@@ -264 +253 @@
-@@ -1479,7 +1497,7 @@ again:
+@@ -1530,7 +1548,7 @@ again:
@@ -273 +262 @@
-@@ -1554,6 +1572,11 @@ again:
+@@ -1605,6 +1623,11 @@ again:
@@ -285 +274 @@
-@@ -1569,6 +1592,7 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws,
+@@ -1620,6 +1643,7 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws,
@@ -293 +282 @@
-@@ -1630,7 +1654,7 @@ again:
+@@ -1681,7 +1705,7 @@ again:
@@ -302 +291 @@
-@@ -1644,7 +1668,7 @@ again:
+@@ -1695,7 +1719,7 @@ again:
@@ -311 +300 @@
-@@ -1717,6 +1741,11 @@ again:
+@@ -1768,6 +1792,11 @@ again:
@@ -323 +312 @@
-@@ -1767,7 +1796,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
+@@ -1818,7 +1847,7 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
@@ -332 +321 @@
-@@ -1782,7 +1811,7 @@ cn10k_nix_prepare_mseg_vec_noff(struct cn10k_eth_txq *txq,
+@@ -1833,7 +1862,7 @@ cn10k_nix_prepare_mseg_vec_noff(struct cn10k_eth_txq *txq,
@@ -341 +330 @@
-@@ -1892,7 +1921,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
+@@ -1943,7 +1972,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
@@ -350 +339 @@
-@@ -1910,7 +1939,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
+@@ -1961,7 +1990,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct cn10k_eth_txq *txq,
@@ -359 +348 @@
-@@ -2063,14 +2092,14 @@ cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
+@@ -2114,14 +2143,14 @@ cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
@@ -376 +365 @@
-@@ -2154,6 +2183,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
+@@ -2205,6 +2234,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
@@ -384 +373 @@
-@@ -3003,8 +3033,8 @@ again:
+@@ -3050,8 +3080,8 @@ again:
@@ -395 +384 @@
-@@ -3076,7 +3106,7 @@ again:
+@@ -3123,7 +3153,7 @@ again:
@@ -404 +393 @@
-@@ -3092,7 +3122,7 @@ again:
+@@ -3139,7 +3169,7 @@ again:
@@ -413 +402 @@
-@@ -3108,7 +3138,7 @@ again:
+@@ -3155,7 +3185,7 @@ again:
@@ -422 +411 @@
-@@ -3124,7 +3154,7 @@ again:
+@@ -3171,7 +3201,7 @@ again:
@@ -431 +420 @@
-@@ -3132,7 +3162,7 @@ again:
+@@ -3179,7 +3209,7 @@ again:
@@ -440 +429 @@
-@@ -3282,6 +3312,11 @@ again:
+@@ -3329,6 +3359,11 @@ again:
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'doc: fix aging poll frequency option in cnxk guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (41 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/cnxk: improve Tx performance for SW mbuf free' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5/hws: skip item when inserting rules by index' " Xueming Li
` (80 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Ankur Dwivedi; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2368f82fd89fc132b9d9b701c04860b2e9a9e307
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2368f82fd89fc132b9d9b701c04860b2e9a9e307 Mon Sep 17 00:00:00 2001
From: Ankur Dwivedi <adwivedi@marvell.com>
Date: Mon, 4 Mar 2024 20:00:06 +0530
Subject: [PATCH] doc: fix aging poll frequency option in cnxk guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7fcf4e8a03c4b0d2c6cec11ddd18c7bf88ba0384 ]
The information for the CNXK NPC MCAM aging poll frequency devargs is moved to
the runtime config options section for the ethdev device. Initially it was
incorrectly placed in the runtime config options section for the inline device.
Fixes: a44775505911 ("net/cnxk: support flow aging")
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
---
doc/guides/nics/cnxk.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 9ec52e380f..28d54be16d 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -416,6 +416,18 @@ Runtime Config Options
With the above configuration, PMD would allocate meta buffers of size 512 for
inline inbound IPsec processing second pass.
+- ``NPC MCAM Aging poll frequency in seconds`` (default ``10``)
+
+ Poll frequency for aging control thread can be specified by
+ ``aging_poll_freq`` devargs parameter.
+
+ For example::
+
+ -a 0002:01:00.2,aging_poll_freq=50
+
+ With the above configuration, driver would poll for aging flows
+ every 50 seconds.
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
@@ -601,18 +613,6 @@ Runtime Config Options for inline device
With the above configuration, driver would poll for soft expiry events every
1000 usec.
-- ``NPC MCAM Aging poll frequency in seconds`` (default ``10``)
-
- Poll frequency for aging control thread can be specified by
- ``aging_poll_freq`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:01:00.2,aging_poll_freq=50
-
- With the above configuration, driver would poll for aging flows every 50
- seconds.
-
Debugging Options
-----------------
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.414337581 +0800
+++ 0044-doc-fix-aging-poll-frequency-option-in-cnxk-guide.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 7fcf4e8a03c4b0d2c6cec11ddd18c7bf88ba0384 Mon Sep 17 00:00:00 2001
+From 2368f82fd89fc132b9d9b701c04860b2e9a9e307 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7fcf4e8a03c4b0d2c6cec11ddd18c7bf88ba0384 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index f11fb59439..f5f296ee36 100644
+index 9ec52e380f..28d54be16d 100644
@@ -22 +24 @@
-@@ -419,6 +419,18 @@ Runtime Config Options
+@@ -416,6 +416,18 @@ Runtime Config Options
@@ -38 +40 @@
- - ``Rx Inject Enable inbound inline IPsec for second pass`` (default ``0``)
+ .. note::
@@ -40,2 +42,2 @@
- Rx packet inject feature for inbound inline IPsec processing can be enabled
-@@ -617,18 +629,6 @@ Runtime Config Options for inline device
+ Above devarg parameters are configurable per device, user needs to pass the
+@@ -601,18 +613,6 @@ Runtime Config Options for inline device
@@ -57 +59,2 @@
- - ``Rx Inject Enable inbound inline IPsec for second pass`` (default ``0``)
+ Debugging Options
+ -----------------
@@ -59 +61,0 @@
- Rx packet inject feature for inbound inline IPsec processing can be enabled
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5/hws: skip item when inserting rules by index' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (42 preceding siblings ...)
2024-04-13 12:48 ` patch 'doc: fix aging poll frequency option in cnxk guide' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5/hws: check not supported fields in VXLAN' " Xueming Li
` (79 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Itamar Gozlan; +Cc: Matan Azrad, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b80ca5960e391dc055f9571beaf0a2927929d6d3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b80ca5960e391dc055f9571beaf0a2927929d6d3 Mon Sep 17 00:00:00 2001
From: Itamar Gozlan <igozlan@nvidia.com>
Date: Sun, 18 Feb 2024 07:11:15 +0200
Subject: [PATCH] net/mlx5/hws: skip item when inserting rules by index
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e23edbc509f68b67b55ee6c06920402c9ac5db2b ]
The location of indexed rules is determined by the index, not the item
hash. A matcher test is added to prevent access to non-existent items.
This avoids unnecessary processing and potential segmentation faults.
Fixes: 405242c52dd5 ("net/mlx5/hws: add rule object")
Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_rule.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index fa19303b91..e39137a6ee 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -23,6 +23,9 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
*skip_rx = false;
*skip_tx = false;
+ if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+ return;
+
if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) {
v = items[mt->vport_item_id].spec;
vport = flow_hw_conv_port_id(v->port_id);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:06.439454649 +0800
+++ 0045-net-mlx5-hws-skip-item-when-inserting-rules-by-index.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From e23edbc509f68b67b55ee6c06920402c9ac5db2b Mon Sep 17 00:00:00 2001
+From b80ca5960e391dc055f9571beaf0a2927929d6d3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e23edbc509f68b67b55ee6c06920402c9ac5db2b ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
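As a side note for reviewers, the guard in the patch above is a plain early-return pattern: when a matcher inserts rules by index, the per-item skip computation is bypassed before any item spec is dereferenced. A minimal standalone C sketch of that pattern (the struct and helper names here are illustrative stand-ins, not the real mlx5dr API):

```c
#include <stdbool.h>

/* Illustrative stand-ins for the matcher state checked by the driver. */
struct matcher {
	bool insert_by_index; /* rules are placed by an explicit index */
	bool vport_item;      /* an item that normally drives the skip check */
};

/* Mirrors the shape of mlx5dr_rule_skip(): default both flags to
 * "do not skip", and return before any per-item lookup when the
 * matcher inserts by index, since the item spec may not exist. */
static void rule_skip(const struct matcher *m, bool *skip_rx, bool *skip_tx)
{
	*skip_rx = false;
	*skip_tx = false;

	if (m->insert_by_index)
		return; /* never dereference item data in this mode */

	if (m->vport_item)
		*skip_rx = true; /* placeholder for the real vport handling */
}
```

With insert-by-index set, both flags stay false regardless of the item flags, which is the behavior the queued fix guarantees.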
* patch 'net/mlx5/hws: check not supported fields in VXLAN' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (43 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5/hws: skip item when inserting rules by index' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5/hws: fix VLAN item in non-relaxed mode' " Xueming Li
` (78 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Erez Shitrit; +Cc: Matan Azrad, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=21d51e8848c68523906da07edf69b41a283b6665
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 21d51e8848c68523906da07edf69b41a283b6665 Mon Sep 17 00:00:00 2001
From: Erez Shitrit <erezsh@nvidia.com>
Date: Sun, 18 Feb 2024 07:11:16 +0200
Subject: [PATCH] net/mlx5/hws: check not supported fields in VXLAN
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4506703cbe96185bf3593a222fb9f51f70de78af ]
Don't allow the user to mask over the reserved fields (rsvd0 / rsvd1),
which are not supported.
Fixes: 28e69588f417 ("net/mlx5/hws: fix tunnel protocol checks")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 7bd4ea560e..720d271179 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1347,6 +1347,13 @@ mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd,
/* In order to match on VXLAN we must match on ether_type, ip_protocol
* and l4_dport.
*/
+ if (m && (m->rsvd0[0] != 0 || m->rsvd0[1] != 0 || m->rsvd0[2] != 0 ||
+ m->rsvd1 != 0)) {
+ DR_LOG(ERR, "reserved fields are not supported");
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
if (!cd->relaxed) {
fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
if (!fc->tag_set) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.466062214 +0800
+++ 0046-net-mlx5-hws-check-not-supported-fields-in-VXLAN.patch 2024-04-13 20:43:04.957753984 +0800
@@ -1 +1 @@
-From 4506703cbe96185bf3593a222fb9f51f70de78af Mon Sep 17 00:00:00 2001
+From 21d51e8848c68523906da07edf69b41a283b6665 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4506703cbe96185bf3593a222fb9f51f70de78af ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 79d98bbf78..8b8757ecac 100644
+index 7bd4ea560e..720d271179 100644
@@ -22,4 +24,4 @@
-@@ -1414,6 +1414,13 @@ mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd,
- struct mlx5dr_definer_fc *fc;
- bool inner = cd->tunnel;
-
+@@ -1347,6 +1347,13 @@ mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd,
+ /* In order to match on VXLAN we must match on ether_type, ip_protocol
+ * and l4_dport.
+ */
@@ -33,3 +35,3 @@
- if (inner) {
- DR_LOG(ERR, "Inner VXLAN item not supported");
- rte_errno = ENOTSUP;
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
^ permalink raw reply [flat|nested] 263+ messages in thread
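The check added above is a standard reserved-field validation: any nonzero bit in a reserved byte of the mask makes the item unsupported. A minimal standalone C sketch of the idea (the struct layout below is illustrative, not the exact rte_flow_item_vxlan definition):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative VXLAN mask layout: vni may be matched, reserved bytes not. */
struct vxlan_mask {
	uint8_t rsvd0[3];
	uint8_t vni[3];
	uint8_t rsvd1;
};

/* Return ENOTSUP when the mask covers any reserved field, 0 otherwise,
 * mirroring the rte_errno = ENOTSUP path in the queued fix. */
static int vxlan_mask_check(const struct vxlan_mask *m)
{
	if (m == NULL)
		return 0; /* no mask given, nothing to reject */
	if (m->rsvd0[0] != 0 || m->rsvd0[1] != 0 || m->rsvd0[2] != 0 ||
	    m->rsvd1 != 0)
		return ENOTSUP;
	return 0;
}
```

A mask that only sets VNI bits passes; any reserved bit fails early, before template translation proceeds.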
* patch 'net/mlx5/hws: fix VLAN item in non-relaxed mode' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (44 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5/hws: check not supported fields in VXLAN' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix use after free when releasing Tx queues' " Xueming Li
` (77 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Hamdan Igbaria; +Cc: Matan Azrad, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a06ab8044ac854dff30f8705f30baf2a8f7e61b2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a06ab8044ac854dff30f8705f30baf2a8f7e61b2 Mon Sep 17 00:00:00 2001
From: Hamdan Igbaria <hamdani@nvidia.com>
Date: Sun, 18 Feb 2024 07:11:20 +0200
Subject: [PATCH] net/mlx5/hws: fix VLAN item in non-relaxed mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0aacd886e93df862f010a45ed333d08355f4bca8 ]
If a VLAN item was passed with null mask, the item handler would
return immediately and thus won't set default values for non relax
mode.
Also change the non relax default set to single-tagged (CVLAN).
Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 720d271179..f047ffdab3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -183,6 +183,7 @@ struct mlx5dr_definer_conv_data {
X(SET, ib_l4_udp_port, UDP_ROCEV2_PORT, rte_flow_item_ib_bth) \
X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth) \
X(SET, ib_l4_bth_a, v->hdr.a, rte_flow_item_ib_bth) \
+ X(SET, cvlan, STE_CVLAN, rte_flow_item_vlan) \
/* Item set function format */
#define X(set_type, func_name, value, item_type) \
@@ -769,6 +770,15 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
struct mlx5dr_definer_fc *fc;
bool inner = cd->tunnel;
+ if (!cd->relaxed) {
+ /* Mark packet as tagged (CVLAN) */
+ fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
+ fc->item_idx = item_idx;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ fc->tag_set = &mlx5dr_definer_cvlan_set;
+ DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
+ }
+
if (!m)
return 0;
@@ -777,8 +787,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
return rte_errno;
}
- if (!cd->relaxed || m->has_more_vlan) {
- /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/
+ if (m->has_more_vlan) {
fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
fc->item_idx = item_idx;
fc->tag_mask_set = &mlx5dr_definer_ones_set;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:06.499237470 +0800
+++ 0047-net-mlx5-hws-fix-VLAN-item-in-non-relaxed-mode.patch 2024-04-13 20:43:04.967753971 +0800
@@ -1 +1 @@
-From 0aacd886e93df862f010a45ed333d08355f4bca8 Mon Sep 17 00:00:00 2001
+From a06ab8044ac854dff30f8705f30baf2a8f7e61b2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0aacd886e93df862f010a45ed333d08355f4bca8 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index eb788a772a..b8a546989a 100644
+index 720d271179..f047ffdab3 100644
@@ -24 +26,2 @@
-@@ -223,6 +223,7 @@ struct mlx5dr_definer_conv_data {
+@@ -183,6 +183,7 @@ struct mlx5dr_definer_conv_data {
+ X(SET, ib_l4_udp_port, UDP_ROCEV2_PORT, rte_flow_item_ib_bth) \
@@ -26 +28,0 @@
- X(SET, random_number, v->value, rte_flow_item_random) \
@@ -32 +34 @@
-@@ -864,6 +865,15 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
+@@ -769,6 +770,15 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
@@ -48 +50 @@
-@@ -872,8 +882,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
+@@ -777,8 +787,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
^ permalink raw reply [flat|nested] 263+ messages in thread
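The essence of the fix above is an ordering change: the non-relaxed CVLAN default is now applied before the null-mask early return instead of after it. A minimal standalone C sketch of that control flow (names and the SVLAN branch are illustrative, not the actual definer code):

```c
#include <stdbool.h>
#include <stddef.h>

enum vlan_qual { QUAL_UNSET = 0, QUAL_CVLAN, QUAL_SVLAN };

struct conv_ctx {
	bool relaxed;
	enum vlan_qual qual;
};

/* Fixed ordering: set the single-tagged (CVLAN) default first, so a
 * null mask can no longer skip it in non-relaxed mode. */
static void conv_vlan(struct conv_ctx *cd, const bool *has_more_vlan)
{
	if (!cd->relaxed)
		cd->qual = QUAL_CVLAN; /* default: single-tagged */

	if (has_more_vlan == NULL)
		return; /* null mask: before the fix, this skipped the default */

	if (*has_more_vlan)
		cd->qual = QUAL_SVLAN; /* illustrative: widen the qualifier */
}
```

In relaxed mode nothing is set by default, matching the patch's intent that defaults only apply to non-relaxed matching.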
* patch 'net/mlx5: fix use after free when releasing Tx queues' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (45 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5/hws: fix VLAN item in non-relaxed mode' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix error packets drop in regular Rx' " Xueming Li
` (76 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Pengfei Sun; +Cc: Yunjian Wang, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=213cb8806857a4dad0b870c4b81b055e616eaa90
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 213cb8806857a4dad0b870c4b81b055e616eaa90 Mon Sep 17 00:00:00 2001
From: Pengfei Sun <sunpengfei16@huawei.com>
Date: Tue, 20 Feb 2024 17:31:39 +0800
Subject: [PATCH] net/mlx5: fix use after free when releasing Tx queues
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b805b7c451f1ee5bafa5628ee67f3a495f6a8682 ]
In function mlx5_dev_configure, dev->data->tx_queues is assigned
to priv->txqs. When a member is removed from a bond, the function
eth_dev_tx_queue_config is called to release dev->data->tx_queues.
However, function mlx5_dev_close will access priv->txqs again,
causing a use-after-free problem.
In function mlx5_dev_close, before freeing priv->txqs, add a check
that dev->data->tx_queues is not NULL.
build/app/dpdk-testpmd -c7 -a 0000:08:00.2 -- -i --nb-cores=2
--total-num-mbufs=2048
testpmd> port stop 0
testpmd> create bonding device 4 0
testpmd> add bonding member 0 1
testpmd> remove bonding member 0 1
testpmd> quit
ASan reports:
==2571911==ERROR: AddressSanitizer: heap-use-after-free on address
0x000174529880 at pc 0x0000113c8440 bp 0xffffefae0ea0 sp 0xffffefae0eb0
READ of size 8 at 0x000174529880 thread T0
#0 0x113c843c in mlx5_txq_release ../drivers/net/mlx5/mlx5_txq.c:
1203
#1 0xffdb53c in mlx5_dev_close ../drivers/net/mlx5/mlx5.c:2286
#2 0xe12dc0 in rte_eth_dev_close ../lib/ethdev/rte_ethdev.c:1877
#3 0x6bac1c in close_port ../app/test-pmd/testpmd.c:3540
#4 0x6bc320 in pmd_test_exit ../app/test-pmd/testpmd.c:3808
#5 0x6c1a94 in main ../app/test-pmd/testpmd.c:4759
#6 0xffff9328f038 (/usr/lib64/libc.so.6+0x2b038)
#7 0xffff9328f110 in __libc_start_main (/usr/lib64/libc.so.6+
0x2b110)
Fixes: 6e78005a9b30 ("net/mlx5: add reference counter on DPDK Tx queues")
Reported-by: Yunjian Wang <wangyunjian@huawei.com>
Signed-off-by: Pengfei Sun <sunpengfei16@huawei.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3a182de248..6b5a4da430 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2279,7 +2279,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_free(priv->rxq_privs);
priv->rxq_privs = NULL;
}
- if (priv->txqs != NULL) {
+ if (priv->txqs != NULL && dev->data->tx_queues != NULL) {
/* XXX race condition if mlx5_tx_burst() is still running. */
rte_delay_us_sleep(1000);
for (i = 0; (i != priv->txqs_n); ++i)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.527472634 +0800
+++ 0048-net-mlx5-fix-use-after-free-when-releasing-Tx-queues.patch 2024-04-13 20:43:04.967753971 +0800
@@ -1 +1 @@
-From b805b7c451f1ee5bafa5628ee67f3a495f6a8682 Mon Sep 17 00:00:00 2001
+From 213cb8806857a4dad0b870c4b81b055e616eaa90 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b805b7c451f1ee5bafa5628ee67f3a495f6a8682 ]
@@ -40 +42,0 @@
-Cc: stable@dpdk.org
@@ -50 +52 @@
-index 881c42a97a..f2ca0ae4c2 100644
+index 3a182de248..6b5a4da430 100644
@@ -53 +55 @@
-@@ -2362,7 +2362,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
+@@ -2279,7 +2279,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix error packets drop in regular Rx' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (46 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix use after free when releasing Tx queues' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: prevent querying aged flows on uninit port' " Xueming Li
` (75 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bfa6cbba4c6cffa1e3a27c17a6f8e89644f340a1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bfa6cbba4c6cffa1e3a27c17a6f8e89644f340a1 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Tue, 20 Feb 2024 13:45:20 +0200
Subject: [PATCH] net/mlx5: fix error packets drop in regular Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ef296e8f6140ea469b50c7bfe73501b1c9ef86e1 ]
When a packet is received with an error, the error is reported
in the CQE structure; the PMD analyzes the error syndrome and
provides two options - either reset the entire queue for
critical errors, or just ignore the packet.
The non-vectorized rx_burst did not ignore non-critical
error packets, and when the packet length exceeded the
mbuf data buffer length it took the next element in the queue
WQE ring, resulting in loss of CQE/WQE consumer index
synchronization.
Fixes: aa67ed308458 ("net/mlx5: ignore non-critical syndromes for Rx queue")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_rx.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 5bf1a679b2..cc087348a4 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -613,7 +613,8 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
* @param mprq
* Indication if it is called from MPRQ.
* @return
- * 0 in case of empty CQE, MLX5_REGULAR_ERROR_CQE_RET in case of error CQE,
+ * 0 in case of empty CQE,
+ * MLX5_REGULAR_ERROR_CQE_RET in case of error CQE,
* MLX5_CRITICAL_ERROR_CQE_RET in case of error CQE lead to Rx queue reset,
* otherwise the packet size in regular RxQ,
* and striding byte count format in mprq case.
@@ -697,6 +698,11 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
if (ret == MLX5_RECOVERY_ERROR_RET ||
ret == MLX5_RECOVERY_COMPLETED_RET)
return MLX5_CRITICAL_ERROR_CQE_RET;
+ if (!mprq && ret == MLX5_RECOVERY_IGNORE_RET) {
+ *skip_cnt = 1;
+ ++rxq->cq_ci;
+ return MLX5_ERROR_CQE_MASK;
+ }
} else {
return 0;
}
@@ -971,19 +977,18 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask];
len = mlx5_rx_poll_len(rxq, cqe, cqe_n, cqe_mask, &mcqe, &skip_cnt, false);
if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
+ /* We drop packets with non-critical errors */
+ rte_mbuf_raw_free(rep);
if (len == MLX5_CRITICAL_ERROR_CQE_RET) {
- rte_mbuf_raw_free(rep);
rq_ci = rxq->rq_ci << sges_n;
break;
}
+ /* Skip specified amount of error CQEs packets */
rq_ci >>= sges_n;
rq_ci += skip_cnt;
rq_ci <<= sges_n;
- idx = rq_ci & wqe_mask;
- wqe = &((volatile struct mlx5_wqe_data_seg *)rxq->wqes)[idx];
- seg = (*rxq->elts)[idx];
- cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_mask];
- len = len & ~MLX5_ERROR_CQE_MASK;
+ MLX5_ASSERT(!pkt);
+ continue;
}
if (len == 0) {
rte_mbuf_raw_free(rep);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.552569301 +0800
+++ 0049-net-mlx5-fix-error-packets-drop-in-regular-Rx.patch 2024-04-13 20:43:04.967753971 +0800
@@ -1 +1 @@
-From ef296e8f6140ea469b50c7bfe73501b1c9ef86e1 Mon Sep 17 00:00:00 2001
+From bfa6cbba4c6cffa1e3a27c17a6f8e89644f340a1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ef296e8f6140ea469b50c7bfe73501b1c9ef86e1 ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: prevent querying aged flows on uninit port' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (47 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix error packets drop in regular Rx' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5/hws: fix VLAN inner type' " Xueming Li
` (74 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bing Zhao; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=99be466799c897e5746c1cf65f2ffe599286702f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 99be466799c897e5746c1cf65f2ffe599286702f Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Wed, 21 Feb 2024 05:23:22 +0200
Subject: [PATCH] net/mlx5: prevent querying aged flows on uninit port
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3b1ccc5833b0bd89931e90c5e843134839ba46ce ]
In the HWS template API, the aging mechanism does not currently
support shared host mode. When the guest's counter is set to 0,
the aging won't be initialized.
The current implementation didn't prevent the user from querying the
aged flows on an uninitialized port. The access of invalid pointers
would cause a crash.
With this commit, the flag of the per-port aging initialization is
checked, which prevents the invalid accesses.
Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index bda1ecf121..25fef8c086 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10746,6 +10746,10 @@ flow_hw_get_q_aged_flows(struct rte_eth_dev *dev, uint32_t queue_id,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "empty context");
+ if (!priv->hws_age_req)
+ return rte_flow_error_set(error, ENOENT,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "No aging initialized");
if (priv->hws_strict_queue) {
if (queue_id >= age_info->hw_q_age->nb_rings)
return rte_flow_error_set(error, EINVAL,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:06.577389468 +0800
+++ 0050-net-mlx5-prevent-querying-aged-flows-on-uninit-port.patch 2024-04-13 20:43:04.967753971 +0800
@@ -1 +1 @@
-From 3b1ccc5833b0bd89931e90c5e843134839ba46ce Mon Sep 17 00:00:00 2001
+From 99be466799c897e5746c1cf65f2ffe599286702f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3b1ccc5833b0bd89931e90c5e843134839ba46ce ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index bd5c46b6ad..91d1c59fbd 100644
+index bda1ecf121..25fef8c086 100644
@@ -30 +32 @@
-@@ -11219,6 +11219,10 @@ flow_hw_get_q_aged_flows(struct rte_eth_dev *dev, uint32_t queue_id,
+@@ -10746,6 +10746,10 @@ flow_hw_get_q_aged_flows(struct rte_eth_dev *dev, uint32_t queue_id,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5/hws: fix VLAN inner type' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (48 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: prevent querying aged flows on uninit port' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix condition of LACP miss flow' " Xueming Li
` (73 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Hamdan Igbaria; +Cc: Erez Shitrit, Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=17f644b4a810b7938d2a63317a5b12454302320e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 17f644b4a810b7938d2a63317a5b12454302320e Mon Sep 17 00:00:00 2001
From: Hamdan Igbaria <hamdani@nvidia.com>
Date: Wed, 21 Feb 2024 08:28:11 +0200
Subject: [PATCH] net/mlx5/hws: fix VLAN inner type
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d1d350d868439357e750297d5719670f706facc2 ]
Set the correct VLAN inner_type value. Until now,
once the VLAN inner_type field was set, an incorrect
value was used instead of the inner_type field.
Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index f047ffdab3..34a5174aa8 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -184,6 +184,7 @@ struct mlx5dr_definer_conv_data {
X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth) \
X(SET, ib_l4_bth_a, v->hdr.a, rte_flow_item_ib_bth) \
X(SET, cvlan, STE_CVLAN, rte_flow_item_vlan) \
+ X(SET_BE16, inner_type, v->inner_type, rte_flow_item_vlan) \
/* Item set function format */
#define X(set_type, func_name, value, item_type) \
@@ -805,7 +806,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
if (m->hdr.eth_proto) {
fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)];
fc->item_idx = item_idx;
- fc->tag_set = &mlx5dr_definer_eth_type_set;
+ fc->tag_set = &mlx5dr_definer_inner_type_set;
DR_CALC_SET(fc, eth_l2, l3_ethertype, inner);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.614457920 +0800
+++ 0051-net-mlx5-hws-fix-VLAN-inner-type.patch 2024-04-13 20:43:04.967753971 +0800
@@ -1 +1 @@
-From d1d350d868439357e750297d5719670f706facc2 Mon Sep 17 00:00:00 2001
+From 17f644b4a810b7938d2a63317a5b12454302320e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d1d350d868439357e750297d5719670f706facc2 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index c1508e6b53..e036aca781 100644
+index f047ffdab3..34a5174aa8 100644
@@ -24,2 +26,2 @@
-@@ -224,6 +224,7 @@ struct mlx5dr_definer_conv_data {
- X(SET, random_number, v->value, rte_flow_item_random) \
+@@ -184,6 +184,7 @@ struct mlx5dr_definer_conv_data {
+ X(SET, ib_l4_opcode, v->hdr.opcode, rte_flow_item_ib_bth) \
@@ -32 +34 @@
-@@ -990,7 +991,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
+@@ -805,7 +806,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix condition of LACP miss flow' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (49 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5/hws: fix VLAN inner type' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix conntrack action handle representation' " Xueming Li
` (72 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bing Zhao; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a5d0545e5d9d61716e1c22598db5ecf7f1bc0d3f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a5d0545e5d9d61716e1c22598db5ecf7f1bc0d3f Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Wed, 21 Feb 2024 08:49:48 +0200
Subject: [PATCH] net/mlx5: fix condition of LACP miss flow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 87e4384d2662f520bfae28ae14dbff2429c6923d ]
The LACP traffic is only related to the bond interface. The default
miss flow to redirect the LACP traffic with ethertype 0x8809 to the
kernel driver should only be created on the bond device.
This commit will:
1. remove the incorrect assertion of the port role.
2. skip the resource allocation and flow rule creation on the
representor port.
Fixes: 0f0ae73a3287 ("net/mlx5: add parameter for LACP packets control")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 3 +--
drivers/net/mlx5/mlx5_trigger.c | 8 ++++----
2 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 25fef8c086..c748e48e31 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8529,7 +8529,7 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
}
}
/* Create LACP default miss table. */
- if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0) {
+ if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
if (!lacp_rx_items_tmpl) {
DRV_LOG(ERR, "port %u failed to create pattern template"
@@ -12222,7 +12222,6 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
.type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
};
- MLX5_ASSERT(priv->master);
if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
return 0;
return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 5ac25d7e2d..f8d67282ce 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1524,7 +1524,7 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
}
if (priv->isolated)
return 0;
- if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0)
+ if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master)
if (mlx5_flow_hw_lacp_rx_flow(dev))
goto error;
if (dev->data->promiscuous)
@@ -1632,14 +1632,14 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
DRV_LOG(INFO, "port %u FDB default rule is disabled",
dev->data->port_id);
}
- if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0) {
+ if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
ret = mlx5_flow_lacp_miss(dev);
if (ret)
DRV_LOG(INFO, "port %u LACP rule cannot be created - "
"forward LACP to kernel.", dev->data->port_id);
else
- DRV_LOG(INFO, "LACP traffic will be missed in port %u."
- , dev->data->port_id);
+ DRV_LOG(INFO, "LACP traffic will be missed in port %u.",
+ dev->data->port_id);
}
if (priv->isolated)
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.641871884 +0800
+++ 0052-net-mlx5-fix-condition-of-LACP-miss-flow.patch 2024-04-13 20:43:04.977753958 +0800
@@ -1 +1 @@
-From 87e4384d2662f520bfae28ae14dbff2429c6923d Mon Sep 17 00:00:00 2001
+From a5d0545e5d9d61716e1c22598db5ecf7f1bc0d3f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 87e4384d2662f520bfae28ae14dbff2429c6923d ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 91d1c59fbd..e7f1b5f551 100644
+index 25fef8c086..c748e48e31 100644
@@ -30 +32 @@
-@@ -8934,7 +8934,7 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
+@@ -8529,7 +8529,7 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
@@ -39 +41 @@
-@@ -12761,7 +12761,6 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
+@@ -12222,7 +12222,6 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix conntrack action handle representation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (50 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix condition of LACP miss flow' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix connection tracking action validation' " Xueming Li
` (71 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d65510ff679b97cd6689ca28633fea9af4e0042
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d65510ff679b97cd6689ca28633fea9af4e0042 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Tue, 27 Feb 2024 15:52:21 +0200
Subject: [PATCH] net/mlx5: fix conntrack action handle representation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4487a79277a11bd5e78f57234b29b77b62f4e653 ]
In mlx5 PMD, handles to indirect connection tracking flow actions
are encoded in 32-bit unsigned integers as follows:
- Bits 31-29 - indirect action type.
- Bits 28-25 - port on which connection tracking action was created.
- Bits 24-0 - index of connection tracking object.
The macro defining the bit shift for the owner part in this
representation was incorrectly defined as 22. This patch fixes that,
as well as aligning the documented limitations.
Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
doc/guides/nics/mlx5.rst | 4 ++--
drivers/net/mlx5/mlx5_flow.h | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 6b52fb93c5..d0ebc101b4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -708,8 +708,8 @@ Limitations
- Cannot co-exist with ASO meter, ASO age action in a single flow rule.
- Flow rules insertion rate and memory consumption need more optimization.
- - 256 ports maximum.
- - 4M connections maximum with ``dv_flow_en`` 1 mode. 16M with ``dv_flow_en`` 2.
+ - 16 ports maximum.
+ - 32M connections maximum.
- Multi-thread flow insertion:
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 6dde9de688..edc273c518 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -77,7 +77,7 @@ enum mlx5_indirect_type {
/* Now, the maximal ports will be supported is 16, action number is 32M. */
#define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x10
-#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22
+#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
#define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)
/* 29-31: type, 25-28: owner port, 0-24: index */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.674066242 +0800
+++ 0053-net-mlx5-fix-conntrack-action-handle-representation.patch 2024-04-13 20:43:04.977753958 +0800
@@ -1 +1 @@
-From 4487a79277a11bd5e78f57234b29b77b62f4e653 Mon Sep 17 00:00:00 2001
+From 1d65510ff679b97cd6689ca28633fea9af4e0042 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4487a79277a11bd5e78f57234b29b77b62f4e653 ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 6dce4f1c98..de286c67c8 100644
+index 6b52fb93c5..d0ebc101b4 100644
@@ -32 +34 @@
-@@ -814,8 +814,8 @@ Limitations
+@@ -708,8 +708,8 @@ Limitations
@@ -44 +46 @@
-index a4d0ff7b13..b4bf96cd64 100644
+index 6dde9de688..edc273c518 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix connection tracking action validation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (51 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix conntrack action handle representation' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix HWS registers initialization' " Xueming Li
` (70 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=527857d5c23773f18c6bc166c227f6ded034d6e2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 527857d5c23773f18c6bc166c227f6ded034d6e2 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Tue, 27 Feb 2024 15:52:22 +0200
Subject: [PATCH] net/mlx5: fix connection tracking action validation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bd2db022c1ad154cd996b7b858c9f04f5a458416 ]
In mlx5 PMD, handles to indirect connection tracking flow actions
are encoded as 32-bit unsigned integers, where port ID is stored
in bits 28-25. Because of this, connection tracking flow actions
cannot be created on ports with IDs higher than 15.
This patch adds missing validation.
Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 9 +++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 7 +++++++
2 files changed, 16 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 79fde3de8e..c0d9e4fb82 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13629,6 +13629,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"Connection is not supported");
+ if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "CT supports port indexes up to "
+ RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+ return 0;
+ }
idx = flow_dv_aso_ct_alloc(dev, error);
if (!idx)
return rte_flow_error_set(error, rte_errno,
@@ -16321,6 +16328,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
case RTE_FLOW_ACTION_TYPE_CONNTRACK:
ret = flow_dv_translate_create_conntrack(dev, action->conf,
err);
+ if (!ret)
+ break;
idx = MLX5_INDIRECT_ACT_CT_GEN_IDX(PORT_ID(priv), ret);
break;
default:
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c748e48e31..4927975461 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9857,6 +9857,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
"CT is not enabled");
return 0;
}
+ if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "CT supports port indexes up to "
+ RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+ return 0;
+ }
ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
if (!ct) {
rte_flow_error_set(error, rte_errno,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.700172408 +0800
+++ 0054-net-mlx5-fix-connection-tracking-action-validation.patch 2024-04-13 20:43:04.987753945 +0800
@@ -1 +1 @@
-From bd2db022c1ad154cd996b7b858c9f04f5a458416 Mon Sep 17 00:00:00 2001
+From 527857d5c23773f18c6bc166c227f6ded034d6e2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bd2db022c1ad154cd996b7b858c9f04f5a458416 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 75a8a223ab..ddf19e9a51 100644
+index 79fde3de8e..c0d9e4fb82 100644
@@ -26 +28 @@
-@@ -13889,6 +13889,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
+@@ -13629,6 +13629,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
@@ -40 +42 @@
-@@ -16586,6 +16593,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
+@@ -16321,6 +16328,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
@@ -50 +52 @@
-index c8440262d8..43bde71c93 100644
+index c748e48e31..4927975461 100644
@@ -53 +55 @@
-@@ -10346,6 +10346,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -9857,6 +9857,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix HWS registers initialization' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (52 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix connection tracking action validation' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5/hws: enable multiple integrity items' " Xueming Li
` (69 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Bing Zhao; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ca1084cd4891d179c6a2a8e595358a9045fd3c9b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ca1084cd4891d179c6a2a8e595358a9045fd3c9b Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Tue, 27 Feb 2024 17:26:27 +0200
Subject: [PATCH] net/mlx5: fix HWS registers initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1beeb6d71fcc2fa73d149071c6880573d78fa071 ]
The method of initializing tag registers by using capability bits is
not supported on some old NICs. Meanwhile, HWS flow
rule insertion is not supported on them either. There is no need to
initialize HWS-only resources on these old NICs.
Fixes: 48041ccbaa8d ("net/mlx5: initialize HWS flow registers in shared context")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6b5a4da430..df0383078d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1689,7 +1689,8 @@ mlx5_init_shared_dev_registers(struct mlx5_dev_ctx_shared *sh)
} else {
DRV_LOG(DEBUG, "ASO register: NONE");
}
- mlx5_init_hws_flow_tags_registers(sh);
+ if (sh->config.dv_flow_en == 2)
+ mlx5_init_hws_flow_tags_registers(sh);
}
/**
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.743432351 +0800
+++ 0055-net-mlx5-fix-HWS-registers-initialization.patch 2024-04-13 20:43:04.987753945 +0800
@@ -1 +1 @@
-From 1beeb6d71fcc2fa73d149071c6880573d78fa071 Mon Sep 17 00:00:00 2001
+From ca1084cd4891d179c6a2a8e595358a9045fd3c9b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1beeb6d71fcc2fa73d149071c6880573d78fa071 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index f2ca0ae4c2..4f7304f099 100644
+index 6b5a4da430..df0383078d 100644
@@ -24 +26 @@
-@@ -1690,7 +1690,8 @@ mlx5_init_shared_dev_registers(struct mlx5_dev_ctx_shared *sh)
+@@ -1689,7 +1689,8 @@ mlx5_init_shared_dev_registers(struct mlx5_dev_ctx_shared *sh)
@@ -33 +35 @@
- static struct mlx5_physical_device *
+ /**
^ permalink raw reply [flat|nested] 263+ messages in thread
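The patch above guards an HWS-only initialization call behind the `dv_flow_en == 2` (HW steering) configuration check. A minimal standalone sketch of that gating pattern follows; the struct and function names are illustrative stand-ins, not the mlx5 PMD's actual types:

```c
#include <assert.h>
#include <stdbool.h>

/* dv_flow_en selects the steering engine in mlx5: 1 = SW (DV) steering,
 * 2 = HW steering (HWS). The names below are illustrative only. */
enum { FLOW_EN_DV = 1, FLOW_EN_HWS = 2 };

struct shared_ctx {
	int dv_flow_en;
	bool hws_tags_initialized;
};

/* Initialize HWS-only resources only when HW steering is selected,
 * mirroring the guard the patch adds around
 * mlx5_init_hws_flow_tags_registers(). */
static void init_registers(struct shared_ctx *sh)
{
	if (sh->dv_flow_en == FLOW_EN_HWS)
		sh->hws_tags_initialized = true; /* stand-in for the real init */
}
```

On an old NIC running SW steering, the HWS-only branch is simply never taken, which is the whole point of the fix.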
* patch 'net/mlx5/hws: enable multiple integrity items' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (53 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix HWS registers initialization' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix VLAN handling in meter split' " Xueming Li
` (68 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Michael Baum; +Cc: Erez Shitrit, Matan Azrad, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=86c66608c2ac4909fc88f28b26b90ae9a95ed8f1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 86c66608c2ac4909fc88f28b26b90ae9a95ed8f1 Mon Sep 17 00:00:00 2001
From: Michael Baum <michaelba@nvidia.com>
Date: Wed, 28 Feb 2024 11:50:55 +0200
Subject: [PATCH] net/mlx5/hws: enable multiple integrity items
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8c178ac8ce81ab0cbd9e90fd7eb4a2b47fca7c2b ]
The integrity item uses the DW "oks1" in the header layout. It includes
all supported bits for both inner and outer. When an item is of integrity
type, the relevant bits are turned on and the whole DW is submitted.
When the user provides more than a single integrity item in the same
pattern, the last one overrides the values that were submitted before.
This is problematic when the user wants to match integrity bits for both
inner and outer in the same pattern: they cannot be merged into a single
item, since the rte_flow API provides an encapsulation level field to
match either inner or outer.
This patch avoids overriding the values: when "oks1" is submitted, an
"or" operation is used instead of a regular set.
Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 34a5174aa8..c6918ef4f3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -41,6 +41,10 @@
(bit_off))); \
} while (0)
+/* Getter function based on bit offset and mask, for 32bit DW*/
+#define DR_GET_32(p, byte_off, bit_off, mask) \
+ ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + ((byte_off) / 4))) >> (bit_off)) & (mask))
+
/* Setter function based on bit offset and mask */
#define DR_SET(p, v, byte_off, bit_off, mask) \
do { \
@@ -379,7 +383,7 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
{
bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I);
const struct rte_flow_item_integrity *v = item_spec;
- uint32_t ok1_bits = 0;
+ uint32_t ok1_bits = DR_GET_32(tag, fc->byte_off, fc->bit_off, fc->bit_mask);
if (v->l3_ok)
ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) :
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.767693920 +0800
+++ 0056-net-mlx5-hws-enable-multiple-integrity-items.patch 2024-04-13 20:43:04.987753945 +0800
@@ -1 +1 @@
-From 8c178ac8ce81ab0cbd9e90fd7eb4a2b47fca7c2b Mon Sep 17 00:00:00 2001
+From 86c66608c2ac4909fc88f28b26b90ae9a95ed8f1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8c178ac8ce81ab0cbd9e90fd7eb4a2b47fca7c2b ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -30 +32 @@
-index e036aca781..0e15aafb8a 100644
+index 34a5174aa8..c6918ef4f3 100644
@@ -33 +35 @@
-@@ -44,6 +44,10 @@
+@@ -41,6 +41,10 @@
@@ -44 +46 @@
-@@ -509,7 +513,7 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
+@@ -379,7 +383,7 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
^ permalink raw reply [flat|nested] 263+ messages in thread
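The fix above replaces a plain assignment with a read-modify-write, so that inner and outer integrity items can both contribute bits to the same shared DW. The following sketch contrasts the two behaviors in isolation; the bit names are hypothetical, not the mlx5dr definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Bits that two logically separate "items" contribute to one shared DW. */
#define OUTER_L3_OK (1u << 0)
#define OUTER_L4_OK (1u << 1)
#define INNER_L3_OK (1u << 2)
#define INNER_L4_OK (1u << 3)

/* Broken variant: each setter starts from zero, so the second call
 * discards whatever the first one submitted. */
static void set_bits_overwrite(uint32_t *dw, uint32_t bits)
{
	uint32_t v = 0; /* loses previously submitted bits */

	v |= bits;
	*dw = v;
}

/* Fixed variant: start from the bits already in the DW
 * (read-modify-write), mirroring the DR_GET_32() + "|=" change. */
static void set_bits_accumulate(uint32_t *dw, uint32_t bits)
{
	uint32_t v = *dw; /* fetch what earlier items submitted */

	v |= bits;
	*dw = v;
}
```

Calling the accumulate variant once for outer bits and once for inner bits leaves all of them set, while the overwrite variant keeps only the last caller's bits.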
* patch 'net/mlx5: fix VLAN handling in meter split' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (54 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5/hws: enable multiple integrity items' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix parameters verification in HWS table create' " Xueming Li
` (67 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0198b11a1109696dff05344fde67fb7a4f7e26d7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0198b11a1109696dff05344fde67fb7a4f7e26d7 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Tue, 27 Feb 2024 14:58:15 +0100
Subject: [PATCH] net/mlx5: fix VLAN handling in meter split
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5d2301a222d77e7bac3a085aa17f8ef7a3af7ffe ]
On the attempt to create a flow rule with:
- matching on REPRESENTED_PORT,
- matching on outer VLAN tag,
- matching on inner VLAN tag,
- METER action,
the flow splitting mechanism for handling metering flows was causing
memory corruption. It was assumed that the suffix flow would have a single
VLAN item (used for translation of OF_PUSH_VLAN/OF_SET_VLAN_VID
actions); however, during flow_meter_split_prep() 2 VLAN items were
parsed. This caused a buffer overflow in the allocated
suffix flow item buffer.
This patch fixes the overflow by accounting for the number of VLAN items
in the flow rule pattern when allocating items for the suffix flow.
Fixes: 50f576d657d7 ("net/mlx5: fix VLAN actions in meter")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 60 +++++++++++++++++++++++-------------
1 file changed, 39 insertions(+), 21 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3e31945f99..ee210549e7 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5817,8 +5817,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
struct mlx5_rte_flow_item_tag *tag_item_spec;
struct mlx5_rte_flow_item_tag *tag_item_mask;
uint32_t tag_id = 0;
- struct rte_flow_item *vlan_item_dst = NULL;
- const struct rte_flow_item *vlan_item_src = NULL;
+ bool vlan_actions;
+ struct rte_flow_item *orig_sfx_items = sfx_items;
const struct rte_flow_item *orig_items = items;
struct rte_flow_action *hw_mtr_action;
struct rte_flow_action *action_pre_head = NULL;
@@ -5835,6 +5835,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
/* Prepare the suffix subflow items. */
tag_item = sfx_items++;
+ tag_item->type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
int item_type = items->type;
@@ -5857,10 +5858,13 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
sfx_items++;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
- /* Determine if copy vlan item below. */
- vlan_item_src = items;
- vlan_item_dst = sfx_items++;
- vlan_item_dst->type = RTE_FLOW_ITEM_TYPE_VOID;
+ /*
+ * Copy VLAN items in case VLAN actions are performed.
+ * If there are no VLAN actions, these items will be VOID.
+ */
+ memcpy(sfx_items, items, sizeof(*sfx_items));
+ sfx_items->type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_VLAN;
+ sfx_items++;
break;
default:
break;
@@ -5877,6 +5881,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
tag_action = actions_pre++;
}
/* Prepare the actions for prefix and suffix flow. */
+ vlan_actions = false;
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
struct rte_flow_action *action_cur = NULL;
@@ -5907,16 +5912,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- if (vlan_item_dst && vlan_item_src) {
- memcpy(vlan_item_dst, vlan_item_src,
- sizeof(*vlan_item_dst));
- /*
- * Convert to internal match item, it is used
- * for vlan push and set vid.
- */
- vlan_item_dst->type = (enum rte_flow_item_type)
- MLX5_RTE_FLOW_ITEM_TYPE_VLAN;
- }
+ vlan_actions = true;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
if (fm->def_policy)
@@ -5931,6 +5927,14 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
actions_sfx++ : actions_pre++;
memcpy(action_cur, actions, sizeof(struct rte_flow_action));
}
+ /* If there are no VLAN actions, convert VLAN items to VOID in suffix flow items. */
+ if (!vlan_actions) {
+ struct rte_flow_item *it = orig_sfx_items;
+
+ for (; it->type != RTE_FLOW_ITEM_TYPE_END; it++)
+ if (it->type == (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_VLAN)
+ it->type = RTE_FLOW_ITEM_TYPE_VOID;
+ }
/* Add end action to the actions. */
actions_sfx->type = RTE_FLOW_ACTION_TYPE_END;
if (priv->sh->meter_aso_en) {
@@ -6020,8 +6024,6 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
tag_action->type = (enum rte_flow_action_type)
MLX5_RTE_FLOW_ACTION_TYPE_TAG;
tag_action->conf = set_tag;
- tag_item->type = (enum rte_flow_item_type)
- MLX5_RTE_FLOW_ITEM_TYPE_TAG;
tag_item->spec = tag_item_spec;
tag_item->last = NULL;
tag_item->mask = tag_item_mask;
@@ -6849,6 +6851,19 @@ flow_meter_create_drop_flow_with_org_pattern(struct rte_eth_dev *dev,
&drop_split_info, error);
}
+static int
+flow_count_vlan_items(const struct rte_flow_item items[])
+{
+ int items_n = 0;
+
+ for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ if (items->type == RTE_FLOW_ITEM_TYPE_VLAN ||
+ items->type == (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_VLAN)
+ items_n++;
+ }
+ return items_n;
+}
+
/**
* The splitting for meter feature.
*
@@ -6904,6 +6919,7 @@ flow_create_split_meter(struct rte_eth_dev *dev,
size_t act_size;
size_t item_size;
int actions_n = 0;
+ int vlan_items_n = 0;
int ret = 0;
if (priv->mtr_en)
@@ -6963,9 +6979,11 @@ flow_create_split_meter(struct rte_eth_dev *dev,
act_size = (sizeof(struct rte_flow_action) *
(actions_n + METER_PREFIX_ACTION)) +
sizeof(struct mlx5_rte_flow_action_set_tag);
- /* Suffix items: tag, vlan, port id, end. */
-#define METER_SUFFIX_ITEM 4
- item_size = sizeof(struct rte_flow_item) * METER_SUFFIX_ITEM +
+ /* Flow can have multiple VLAN items. Account for them in suffix items. */
+ vlan_items_n = flow_count_vlan_items(items);
+ /* Suffix items: tag, [vlans], port id, end. */
+#define METER_SUFFIX_ITEM 3
+ item_size = sizeof(struct rte_flow_item) * (METER_SUFFIX_ITEM + vlan_items_n) +
sizeof(struct mlx5_rte_flow_item_tag) * 2;
sfx_actions = mlx5_malloc(MLX5_MEM_ZERO, (act_size + item_size),
0, SOCKET_ID_ANY);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.793788586 +0800
+++ 0057-net-mlx5-fix-VLAN-handling-in-meter-split.patch 2024-04-13 20:43:04.997753931 +0800
@@ -1 +1 @@
-From 5d2301a222d77e7bac3a085aa17f8ef7a3af7ffe Mon Sep 17 00:00:00 2001
+From 0198b11a1109696dff05344fde67fb7a4f7e26d7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5d2301a222d77e7bac3a085aa17f8ef7a3af7ffe ]
@@ -24 +26,0 @@
-Cc: stable@dpdk.org
@@ -33 +35 @@
-index c7d70b8c7b..f8943a60be 100644
+index 3e31945f99..ee210549e7 100644
@@ -36 +38 @@
-@@ -5707,8 +5707,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5817,8 +5817,8 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -47 +49 @@
-@@ -5725,6 +5725,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5835,6 +5835,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -55 +57 @@
-@@ -5747,10 +5748,13 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5857,10 +5858,13 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -73 +75 @@
-@@ -5767,6 +5771,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5877,6 +5881,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -81 +83 @@
-@@ -5797,16 +5802,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5907,16 +5912,7 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -99 +101 @@
-@@ -5821,6 +5817,14 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -5931,6 +5927,14 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -114 +116 @@
-@@ -5910,8 +5914,6 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
+@@ -6020,8 +6024,6 @@ flow_meter_split_prep(struct rte_eth_dev *dev,
@@ -123 +125 @@
-@@ -6739,6 +6741,19 @@ flow_meter_create_drop_flow_with_org_pattern(struct rte_eth_dev *dev,
+@@ -6849,6 +6851,19 @@ flow_meter_create_drop_flow_with_org_pattern(struct rte_eth_dev *dev,
@@ -143 +145 @@
-@@ -6794,6 +6809,7 @@ flow_create_split_meter(struct rte_eth_dev *dev,
+@@ -6904,6 +6919,7 @@ flow_create_split_meter(struct rte_eth_dev *dev,
@@ -151 +153 @@
-@@ -6853,9 +6869,11 @@ flow_create_split_meter(struct rte_eth_dev *dev,
+@@ -6963,9 +6979,11 @@ flow_create_split_meter(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 263+ messages in thread
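The core of the fix above is sizing the suffix item buffer from the actual number of VLAN items in the pattern instead of assuming at most one. A standalone sketch of that count-before-allocate pattern, using a simplified item representation rather than the real rte_flow structs:

```c
#include <assert.h>
#include <stddef.h>

enum item_type { ITEM_ETH, ITEM_VLAN, ITEM_IPV4, ITEM_END };

struct item {
	enum item_type type;
};

/* Count VLAN items in an END-terminated pattern, mirroring
 * flow_count_vlan_items() in the patch. */
static int count_vlan_items(const struct item *items)
{
	int n = 0;

	for (; items->type != ITEM_END; items++)
		if (items->type == ITEM_VLAN)
			n++;
	return n;
}

/* Fixed sizing: 3 fixed suffix slots (tag, port id, end) plus one slot
 * per VLAN item, so a QinQ (double VLAN) pattern no longer overflows
 * the allocated buffer. */
static size_t suffix_items_size(const struct item *items)
{
	const int fixed_slots = 3;

	return sizeof(struct item) * (fixed_slots + count_vlan_items(items));
}
```

With a fixed-size allocation of 4 slots (the old METER_SUFFIX_ITEM), a pattern carrying 2 VLAN items plus tag, port id, and end needed 5 slots, which is exactly the overflow the commit describes.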
* patch 'net/mlx5: fix parameters verification in HWS table create' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (55 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix VLAN handling in meter split' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:48 ` patch 'net/mlx5: fix flow counter cache starvation' " Xueming Li
` (66 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b90c42e4ff19cd494711152d52dabd0abdce4692
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b90c42e4ff19cd494711152d52dabd0abdce4692 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Wed, 28 Feb 2024 15:33:10 +0200
Subject: [PATCH] net/mlx5: fix parameters verification in HWS table create
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 075796dcc692b50a7b3e89cf969210171d09a06c ]
Modified the conditionals in `flow_hw_table_create()` to use bitwise
AND instead of equality checks when assessing the
`table_cfg->attr->specialize` bitmask.
This allows for greater flexibility, as the bitmask may encapsulate
multiple flags.
The patch maintains the previous behavior for single flag values,
while adding support for multiple flags.
Fixes: 240b77cfcba5 ("net/mlx5: enable hint in async flow table")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4927975461..93035c8548 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4368,12 +4368,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
/* Parse hints information. */
if (attr->specialize) {
- if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
- matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
- else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
- matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
- else
- DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+ uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+ RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+ if ((attr->specialize & val) == val) {
+ DRV_LOG(INFO, "Invalid hint value %x",
+ attr->specialize);
+ rte_errno = EINVAL;
+ goto it_error;
+ }
+ if (attr->specialize &
+ RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+ matcher_attr.optimize_flow_src =
+ MLX5DR_MATCHER_FLOW_SRC_WIRE;
+ else if (attr->specialize &
+ RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+ matcher_attr.optimize_flow_src =
+ MLX5DR_MATCHER_FLOW_SRC_VPORT;
}
/* Build the item template. */
for (i = 0; i < nb_item_templates; i++) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.830815337 +0800
+++ 0058-net-mlx5-fix-parameters-verification-in-HWS-table-cr.patch 2024-04-13 20:43:04.997753931 +0800
@@ -1 +1 @@
-From 075796dcc692b50a7b3e89cf969210171d09a06c Mon Sep 17 00:00:00 2001
+From b90c42e4ff19cd494711152d52dabd0abdce4692 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 075796dcc692b50a7b3e89cf969210171d09a06c ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 74a3133c13..76783145cf 100644
+index 4927975461..93035c8548 100644
@@ -27 +29 @@
-@@ -4397,12 +4397,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
+@@ -4368,12 +4368,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 263+ messages in thread
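The patch above switches from equality checks to bitwise tests so `specialize` can carry several flags, and it explicitly rejects the contradictory combination of both origin hints. A hedged sketch of that validation logic; the flag values and names are illustrative, not the rte_flow definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define HINT_WIRE_ORIG  (1u << 0)
#define HINT_VPORT_ORIG (1u << 1)

enum flow_src { FLOW_SRC_ANY, FLOW_SRC_WIRE, FLOW_SRC_VPORT };

/* Parse a specialize bitmask the way the fixed code does: first reject
 * the mutually exclusive pair, then test each flag with bitwise AND so
 * extra unrelated bits do not defeat the match. Returns 0 or -EINVAL. */
static int parse_specialize(uint32_t specialize, enum flow_src *out)
{
	const uint32_t both = HINT_WIRE_ORIG | HINT_VPORT_ORIG;

	*out = FLOW_SRC_ANY;
	if ((specialize & both) == both)
		return -EINVAL; /* wire and vport origin contradict */
	if (specialize & HINT_WIRE_ORIG)
		*out = FLOW_SRC_WIRE;
	else if (specialize & HINT_VPORT_ORIG)
		*out = FLOW_SRC_VPORT;
	return 0;
}
```

Note how a mask with an unrelated extra bit set still resolves to the right source, which the old `==` comparisons would have missed.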
* patch 'net/mlx5: fix flow counter cache starvation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (56 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix parameters verification in HWS table create' " Xueming Li
@ 2024-04-13 12:48 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix counters map in bonding mode' " Xueming Li
` (65 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:48 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Ori Kam, Bing Zhao, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=091234f3cb541f513fb086fb511cb0ce390e7d82
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 091234f3cb541f513fb086fb511cb0ce390e7d82 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wed, 28 Feb 2024 20:06:06 +0100
Subject: [PATCH] net/mlx5: fix flow counter cache starvation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d755221b77c29549be4025e547cf907ad3a0abcf ]
mlx5 PMD maintains a global counter pool and per-queue counter cache,
which are used to allocate COUNT flow action objects.
Whenever an empty cache is accessed, it is replenished
with a pre-defined number of counters.
If the number of configured counters was sufficiently small, the
caches associated with some queues could get starved because all
counters were fetched on other queues.
This patch fixes that by disabling the cache at runtime
if the number of configured counters is not sufficient to avoid
such starvation.
Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 6 +--
drivers/net/mlx5/mlx5_hws_cnt.c | 72 ++++++++++++++++++++++++---------
drivers/net/mlx5/mlx5_hws_cnt.h | 25 +++++++++---
3 files changed, 74 insertions(+), 29 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 93035c8548..7cef9bd3ff 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3123,8 +3123,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
break;
/* Fall-through. */
case RTE_FLOW_ACTION_TYPE_COUNT:
- /* If the port is engaged in resource sharing, do not use queue cache. */
- cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
+ cnt_queue = mlx5_hws_cnt_get_queue(priv, &queue);
ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &cnt_id, age_idx);
if (ret != 0)
return ret;
@@ -3722,8 +3721,7 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
}
return;
}
- /* If the port is engaged in resource sharing, do not use queue cache. */
- cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
+ cnt_queue = mlx5_hws_cnt_get_queue(priv, &queue);
/* Put the counter first to reduce the race risk in BG thread. */
mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
flow->cnt_id = 0;
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index a3bea94811..c31f2f380b 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -340,6 +340,55 @@ mlx5_hws_cnt_pool_deinit(struct mlx5_hws_cnt_pool * const cntp)
mlx5_free(cntp);
}
+static bool
+mlx5_hws_cnt_should_enable_cache(const struct mlx5_hws_cnt_pool_cfg *pcfg,
+ const struct mlx5_hws_cache_param *ccfg)
+{
+ /*
+ * Enable cache if and only if there are enough counters requested
+ * to populate all of the caches.
+ */
+ return pcfg->request_num >= ccfg->q_num * ccfg->size;
+}
+
+static struct mlx5_hws_cnt_pool_caches *
+mlx5_hws_cnt_cache_init(const struct mlx5_hws_cnt_pool_cfg *pcfg,
+ const struct mlx5_hws_cache_param *ccfg)
+{
+ struct mlx5_hws_cnt_pool_caches *cache;
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ uint32_t qidx;
+
+ /* If counter pool is big enough, setup the counter pool cache. */
+ cache = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO,
+ sizeof(*cache) +
+ sizeof(((struct mlx5_hws_cnt_pool_caches *)0)->qcache[0])
+ * ccfg->q_num, 0, SOCKET_ID_ANY);
+ if (cache == NULL)
+ return NULL;
+ /* Store the necessary cache parameters. */
+ cache->fetch_sz = ccfg->fetch_sz;
+ cache->preload_sz = ccfg->preload_sz;
+ cache->threshold = ccfg->threshold;
+ cache->q_num = ccfg->q_num;
+ for (qidx = 0; qidx < ccfg->q_num; qidx++) {
+ snprintf(mz_name, sizeof(mz_name), "%s_qc/%x", pcfg->name, qidx);
+ cache->qcache[qidx] = rte_ring_create(mz_name, ccfg->size,
+ SOCKET_ID_ANY,
+ RING_F_SP_ENQ | RING_F_SC_DEQ |
+ RING_F_EXACT_SZ);
+ if (cache->qcache[qidx] == NULL)
+ goto error;
+ }
+ return cache;
+
+error:
+ while (qidx--)
+ rte_ring_free(cache->qcache[qidx]);
+ mlx5_free(cache);
+ return NULL;
+}
+
static struct mlx5_hws_cnt_pool *
mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
const struct mlx5_hws_cnt_pool_cfg *pcfg,
@@ -348,7 +397,6 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
char mz_name[RTE_MEMZONE_NAMESIZE];
struct mlx5_hws_cnt_pool *cntp;
uint64_t cnt_num = 0;
- uint32_t qidx;
MLX5_ASSERT(pcfg);
MLX5_ASSERT(ccfg);
@@ -360,17 +408,6 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
cntp->cfg = *pcfg;
if (cntp->cfg.host_cpool)
return cntp;
- cntp->cache = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO,
- sizeof(*cntp->cache) +
- sizeof(((struct mlx5_hws_cnt_pool_caches *)0)->qcache[0])
- * ccfg->q_num, 0, SOCKET_ID_ANY);
- if (cntp->cache == NULL)
- goto error;
- /* store the necessary cache parameters. */
- cntp->cache->fetch_sz = ccfg->fetch_sz;
- cntp->cache->preload_sz = ccfg->preload_sz;
- cntp->cache->threshold = ccfg->threshold;
- cntp->cache->q_num = ccfg->q_num;
if (pcfg->request_num > sh->hws_max_nb_counters) {
DRV_LOG(ERR, "Counter number %u "
"is greater than the maximum supported (%u).",
@@ -418,13 +455,10 @@ mlx5_hws_cnt_pool_init(struct mlx5_dev_ctx_shared *sh,
DRV_LOG(ERR, "failed to create reuse list ring");
goto error;
}
- for (qidx = 0; qidx < ccfg->q_num; qidx++) {
- snprintf(mz_name, sizeof(mz_name), "%s_qc/%x", pcfg->name, qidx);
- cntp->cache->qcache[qidx] = rte_ring_create(mz_name, ccfg->size,
- SOCKET_ID_ANY,
- RING_F_SP_ENQ | RING_F_SC_DEQ |
- RING_F_EXACT_SZ);
- if (cntp->cache->qcache[qidx] == NULL)
+ /* Allocate counter cache only if needed. */
+ if (mlx5_hws_cnt_should_enable_cache(pcfg, ccfg)) {
+ cntp->cache = mlx5_hws_cnt_cache_init(pcfg, ccfg);
+ if (cntp->cache == NULL)
goto error;
}
/* Initialize the time for aging-out calculation. */
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 585b5a83ad..e00596088f 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -557,19 +557,32 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
}
/**
- * Check if counter pool allocated for HWS is shared between ports.
+ * Decide if the given queue can be used to perform counter allocation/deallocation
+ * based on counter configuration
*
* @param[in] priv
* Pointer to the port private data structure.
+ * @param[in] queue
+ * Pointer to the queue index.
*
* @return
- * True if counter pools is shared between ports. False otherwise.
+ * @p queue if cache related to the queue can be used. NULL otherwise.
*/
-static __rte_always_inline bool
-mlx5_hws_cnt_is_pool_shared(struct mlx5_priv *priv)
+static __rte_always_inline uint32_t *
+mlx5_hws_cnt_get_queue(struct mlx5_priv *priv, uint32_t *queue)
{
- return priv && priv->hws_cpool &&
- (priv->shared_refcnt || priv->hws_cpool->cfg.host_cpool != NULL);
+ if (priv && priv->hws_cpool) {
+ /* Do not use queue cache if counter pool is shared. */
+ if (priv->shared_refcnt || priv->hws_cpool->cfg.host_cpool != NULL)
+ return NULL;
+ /* Do not use queue cache if counter cache is disabled. */
+ if (priv->hws_cpool->cache == NULL)
+ return NULL;
+ return queue;
+ }
+ /* This case should not be reached if counter pool was successfully configured. */
+ MLX5_ASSERT(false);
+ return NULL;
}
static __rte_always_inline unsigned int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.868225888 +0800
+++ 0059-net-mlx5-fix-flow-counter-cache-starvation.patch 2024-04-13 20:43:04.997753931 +0800
@@ -1 +1 @@
-From d755221b77c29549be4025e547cf907ad3a0abcf Mon Sep 17 00:00:00 2001
+From 091234f3cb541f513fb086fb511cb0ce390e7d82 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d755221b77c29549be4025e547cf907ad3a0abcf ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -32 +34 @@
-index 9620b7f576..c1dbdc5f19 100644
+index 93035c8548..7cef9bd3ff 100644
@@ -35 +37 @@
-@@ -3131,8 +3131,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
+@@ -3123,8 +3123,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
@@ -45 +47 @@
-@@ -3776,8 +3775,7 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
+@@ -3722,8 +3721,7 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
^ permalink raw reply [flat|nested] 263+ messages in thread
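The starvation fix above hinges on one predicate: per-queue caches are enabled only when the configured counters can fully populate every cache at once. A minimal sketch of that sizing check, with an illustrative config struct rather than the PMD's real one:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct cnt_cfg {
	uint32_t request_num; /* counters requested for the pool */
	uint32_t q_num;       /* number of queues */
	uint32_t cache_size;  /* per-queue cache capacity */
};

/* Mirror of mlx5_hws_cnt_should_enable_cache(): caching is safe only if
 * every queue cache could be filled simultaneously; otherwise a few busy
 * queues could hoard all counters and starve the remaining queues. */
static bool should_enable_cache(const struct cnt_cfg *cfg)
{
	/* Widen before multiplying to avoid 32-bit overflow. */
	return cfg->request_num >= (uint64_t)cfg->q_num * cfg->cache_size;
}
```

When this predicate is false, the fixed code leaves the cache pointer NULL and all allocations fall through to the shared global pool, trading some per-queue speed for fairness.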
* patch 'net/mlx5: fix counters map in bonding mode' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (57 preceding siblings ...)
2024-04-13 12:48 ` patch 'net/mlx5: fix flow counter cache starvation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix flow action template expansion' " Xueming Li
` (64 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Bing Zhao; +Cc: Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0c31d1220ffaff4742154c3c957ab5305e5f5c3a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0c31d1220ffaff4742154c3c957ab5305e5f5c3a Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Thu, 29 Feb 2024 11:34:56 +0200
Subject: [PATCH] net/mlx5: fix counters map in bonding mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a687c3e658c2d889052089af8340bc0b9299c856 ]
In the HW-LAG mode, there is only one mlx5 IB device with 2 ETH
interfaces. In theory, the settings on both ports should be the same.
But in real life, the user may apply inconsistent settings and the
PMD is not aware of this.
In the previous implementation, the xstats map was generated from the
information fetched on the 1st port of a bonding interface. If the
2nd port had a different settings, the number and the order of the
counters may differ from that of the 1st one. The ioctl() call may
corrupt the user buffers (copy_to_user) and cause a crash.
This commit changes the mapping from the driver counters to the
PMD user counters.
1. Switch the inner and outer loops to speed up the initialization
time as much as possible, since there will be >300 counters returned
from the driver.
2. Generate an unique map for both ports in LAG mode.
a. Scan the 1st port and find the supported counters' strings,
then add to the map.
b. In bonding, scan the 2nd port and find the strings. If one is
already in the map, use the index. Or append to the next free
slot.
c. Append the device counters that need to be fetched via sysfs
or the Devx command. This kind of counter is unique per IB
device.
After querying the statistics from the driver, the value will be read
from the proper offset in the "struct ethtool_stats" and then added
into the output array based on the map information. In bonding mode,
the statistics from both ports will be accumulated if the counters
are valid on both ports.
Compared to the system call or Devx command, the overhead introduced
by the extra index comparison is light and should not cause a
significant degradation.
In most cases, the application should ensure that the port
settings are not changed dynamically outside of the DPDK
application. Otherwise, the PMD cannot be notified of the change,
and the counters map might become invalid when the number of
counters stays the same but the counters set has changed. A device
restart will re-initialize the map from scratch.
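The two-port map-building scheme described above can be sketched as a
standalone miniature. All names here (`counter_map`, `build_map`,
`find_str`, `IDX_NONE`) are hypothetical illustrations, not the actual
PMD code: for each known counter name we record its index in each
port's driver string table and one shared output index, reusing the
1st port's slot when the 2nd port reports the same counter.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_XSTATS 16
#define IDX_NONE UINT16_MAX

/*
 * Miniature of the two-port map: for each known counter name, keep its
 * index in the driver string table of each port (dev_idx) and a shared
 * index in the output array (out_idx). Names already mapped by port 0
 * reuse the same output slot; names only on port 1 are appended.
 */
struct counter_map {
	uint16_t dev_idx[2][MAX_XSTATS];
	uint16_t out_idx[2][MAX_XSTATS];
	uint16_t n_out;
};

static int
find_str(const char **tbl, int n, const char *s)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(tbl[i], s))
			return i;
	return -1;
}

static void
build_map(struct counter_map *m, const char **known, int n_known,
	  const char **p0, int n0, const char **p1, int n1)
{
	m->n_out = 0;
	/* Step a: scan the 1st port, outer loop over the known names. */
	for (int j = 0; j < n_known; j++) {
		int i0 = find_str(p0, n0, known[j]);

		m->dev_idx[0][j] = i0 < 0 ? IDX_NONE : (uint16_t)i0;
		m->out_idx[0][j] = i0 < 0 ? IDX_NONE : m->n_out++;
	}
	/* Step b: scan the 2nd port, reusing slots mapped by the 1st. */
	for (int j = 0; j < n_known; j++) {
		int i1 = find_str(p1, n1, known[j]);

		m->dev_idx[1][j] = i1 < 0 ? IDX_NONE : (uint16_t)i1;
		if (i1 < 0)
			m->out_idx[1][j] = IDX_NONE;
		else if (m->dev_idx[0][j] != IDX_NONE)
			m->out_idx[1][j] = m->out_idx[0][j];
		else
			m->out_idx[1][j] = m->n_out++;
	}
}
```

When reading the statistics, each port's ethtool value at `dev_idx`
would then be accumulated into `stats[out_idx]`, which corresponds to
the per-port loops in the patched _mlx5_os_read_dev_counters().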
Fixes: 7ed15acdcd69 ("net/mlx5: improve xstats of bonding port")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 249 +++++++++++++++-------
drivers/net/mlx5/mlx5.h | 15 +-
drivers/net/mlx5/mlx5_stats.c | 58 +++--
drivers/net/mlx5/windows/mlx5_ethdev_os.c | 22 +-
4 files changed, 242 insertions(+), 102 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index dd5a0c546d..0ee8c58ba7 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -1286,13 +1286,16 @@ _mlx5_os_read_dev_counters(struct rte_eth_dev *dev, int pf, uint64_t *stats)
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
unsigned int i;
struct ifreq ifr;
- unsigned int stats_sz = xstats_ctrl->stats_n * sizeof(uint64_t);
+ unsigned int max_stats_n = RTE_MAX(xstats_ctrl->stats_n, xstats_ctrl->stats_n_2nd);
+ unsigned int stats_sz = max_stats_n * sizeof(uint64_t);
unsigned char et_stat_buf[sizeof(struct ethtool_stats) + stats_sz];
struct ethtool_stats *et_stats = (struct ethtool_stats *)et_stat_buf;
int ret;
+ uint16_t i_idx, o_idx;
et_stats->cmd = ETHTOOL_GSTATS;
- et_stats->n_stats = xstats_ctrl->stats_n;
+ /* Pass the maximum value, the driver may ignore this. */
+ et_stats->n_stats = max_stats_n;
ifr.ifr_data = (caddr_t)et_stats;
if (pf >= 0)
ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[pf].ifname,
@@ -1305,21 +1308,34 @@ _mlx5_os_read_dev_counters(struct rte_eth_dev *dev, int pf, uint64_t *stats)
dev->data->port_id);
return ret;
}
- for (i = 0; i != xstats_ctrl->mlx5_stats_n; ++i) {
- if (xstats_ctrl->info[i].dev)
- continue;
- stats[i] += (uint64_t)
- et_stats->data[xstats_ctrl->dev_table_idx[i]];
+ if (pf <= 0) {
+ for (i = 0; i != xstats_ctrl->mlx5_stats_n; i++) {
+ i_idx = xstats_ctrl->dev_table_idx[i];
+ if (i_idx == UINT16_MAX || xstats_ctrl->info[i].dev)
+ continue;
+ o_idx = xstats_ctrl->xstats_o_idx[i];
+ stats[o_idx] += (uint64_t)et_stats->data[i_idx];
+ }
+ } else {
+ for (i = 0; i != xstats_ctrl->mlx5_stats_n; i++) {
+ i_idx = xstats_ctrl->dev_table_idx_2nd[i];
+ if (i_idx == UINT16_MAX)
+ continue;
+ o_idx = xstats_ctrl->xstats_o_idx_2nd[i];
+ stats[o_idx] += (uint64_t)et_stats->data[i_idx];
+ }
}
return 0;
}
-/**
+/*
* Read device counters.
*
* @param dev
* Pointer to Ethernet device.
- * @param[out] stats
+ * @param bond_master
+ * Indicate if the device is a bond master.
+ * @param stats
* Counters table output buffer.
*
* @return
@@ -1327,7 +1343,7 @@ _mlx5_os_read_dev_counters(struct rte_eth_dev *dev, int pf, uint64_t *stats)
* rte_errno is set.
*/
int
-mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
+mlx5_os_read_dev_counters(struct rte_eth_dev *dev, bool bond_master, uint64_t *stats)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
@@ -1335,7 +1351,7 @@ mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
memset(stats, 0, sizeof(*stats) * xstats_ctrl->mlx5_stats_n);
/* Read ifreq counters. */
- if (priv->master && priv->pf_bond >= 0) {
+ if (bond_master) {
/* Sum xstats from bonding device member ports. */
for (i = 0; i < priv->sh->bond.n_port; i++) {
ret = _mlx5_os_read_dev_counters(dev, i, stats);
@@ -1347,13 +1363,17 @@ mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
if (ret)
return ret;
}
- /* Read IB counters. */
- for (i = 0; i != xstats_ctrl->mlx5_stats_n; ++i) {
+ /*
+ * Read IB counters.
+ * The counters are unique per IB device but not per net IF.
+ * In bonding mode, getting the stats name only from 1 port is enough.
+ */
+ for (i = 0; i != xstats_ctrl->mlx5_stats_n; i++) {
if (!xstats_ctrl->info[i].dev)
continue;
/* return last xstats counter if fail to read. */
if (mlx5_os_read_dev_stat(priv, xstats_ctrl->info[i].ctr_name,
- &stats[i]) == 0)
+ &stats[i]) == 0)
xstats_ctrl->xstats[i] = stats[i];
else
stats[i] = xstats_ctrl->xstats[i];
@@ -1361,18 +1381,24 @@ mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
return ret;
}
-/**
+/*
* Query the number of statistics provided by ETHTOOL.
*
* @param dev
* Pointer to Ethernet device.
+ * @param bond_master
+ * Indicate if the device is a bond master.
+ * @param n_stats
+ * Pointer to number of stats to store.
+ * @param n_stats_sec
+ * Pointer to number of stats to store for the 2nd port of the bond.
*
* @return
- * Number of statistics on success, negative errno value otherwise and
- * rte_errno is set.
+ * 0 on success, negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_os_get_stats_n(struct rte_eth_dev *dev)
+mlx5_os_get_stats_n(struct rte_eth_dev *dev, bool bond_master,
+ uint16_t *n_stats, uint16_t *n_stats_sec)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct ethtool_drvinfo drvinfo;
@@ -1381,18 +1407,34 @@ mlx5_os_get_stats_n(struct rte_eth_dev *dev)
drvinfo.cmd = ETHTOOL_GDRVINFO;
ifr.ifr_data = (caddr_t)&drvinfo;
- if (priv->master && priv->pf_bond >= 0)
- /* Bonding PF. */
+ /* Bonding PFs. */
+ if (bond_master) {
ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[0].ifname,
SIOCETHTOOL, &ifr);
- else
+ if (ret) {
+ DRV_LOG(WARNING, "bonding port %u unable to query number of"
+ " statistics for the 1st slave, %d", PORT_ID(priv), ret);
+ return ret;
+ }
+ *n_stats = drvinfo.n_stats;
+ ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[1].ifname,
+ SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "bonding port %u unable to query number of"
+ " statistics for the 2nd slave, %d", PORT_ID(priv), ret);
+ return ret;
+ }
+ *n_stats_sec = drvinfo.n_stats;
+ } else {
ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
- if (ret) {
- DRV_LOG(WARNING, "port %u unable to query number of statistics",
- dev->data->port_id);
- return ret;
+ if (ret) {
+ DRV_LOG(WARNING, "port %u unable to query number of statistics",
+ PORT_ID(priv));
+ return ret;
+ }
+ *n_stats = drvinfo.n_stats;
}
- return drvinfo.n_stats;
+ return 0;
}
static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
@@ -1578,6 +1620,101 @@ static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
static const unsigned int xstats_n = RTE_DIM(mlx5_counters_init);
+static int
+mlx5_os_get_stats_strings(struct rte_eth_dev *dev, bool bond_master,
+ struct ethtool_gstrings *strings,
+ uint32_t stats_n, uint32_t stats_n_2nd)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ struct ifreq ifr;
+ int ret;
+ uint32_t i, j, idx;
+
+ /* Ensure no out of bounds access before. */
+ MLX5_ASSERT(xstats_n <= MLX5_MAX_XSTATS);
+ strings->cmd = ETHTOOL_GSTRINGS;
+ strings->string_set = ETH_SS_STATS;
+ strings->len = stats_n;
+ ifr.ifr_data = (caddr_t)strings;
+ if (bond_master)
+ ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[0].ifname,
+ SIOCETHTOOL, &ifr);
+ else
+ ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "port %u unable to get statistic names with %d",
+ PORT_ID(priv), ret);
+ return ret;
+ }
+ /* Reorganize the orders to reduce the iterations. */
+ for (j = 0; j < xstats_n; j++) {
+ xstats_ctrl->dev_table_idx[j] = UINT16_MAX;
+ for (i = 0; i < stats_n; i++) {
+ const char *curr_string =
+ (const char *)&strings->data[i * ETH_GSTRING_LEN];
+
+ if (!strcmp(mlx5_counters_init[j].ctr_name, curr_string)) {
+ idx = xstats_ctrl->mlx5_stats_n++;
+ xstats_ctrl->dev_table_idx[j] = i;
+ xstats_ctrl->xstats_o_idx[j] = idx;
+ xstats_ctrl->info[idx] = mlx5_counters_init[j];
+ }
+ }
+ }
+ if (!bond_master) {
+ /* Add dev counters, unique per IB device. */
+ for (j = 0; j != xstats_n; j++) {
+ if (mlx5_counters_init[j].dev) {
+ idx = xstats_ctrl->mlx5_stats_n++;
+ xstats_ctrl->info[idx] = mlx5_counters_init[j];
+ xstats_ctrl->hw_stats[idx] = 0;
+ }
+ }
+ return 0;
+ }
+
+ strings->len = stats_n_2nd;
+ ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[1].ifname,
+ SIOCETHTOOL, &ifr);
+ if (ret) {
+ DRV_LOG(WARNING, "port %u unable to get statistic names for 2nd slave with %d",
+ PORT_ID(priv), ret);
+ return ret;
+ }
+ /* The 2nd slave port may have a different strings set, based on the configuration. */
+ for (j = 0; j != xstats_n; j++) {
+ xstats_ctrl->dev_table_idx_2nd[j] = UINT16_MAX;
+ for (i = 0; i != stats_n_2nd; i++) {
+ const char *curr_string =
+ (const char *)&strings->data[i * ETH_GSTRING_LEN];
+
+ if (!strcmp(mlx5_counters_init[j].ctr_name, curr_string)) {
+ xstats_ctrl->dev_table_idx_2nd[j] = i;
+ if (xstats_ctrl->dev_table_idx[j] != UINT16_MAX) {
+ /* Already mapped in the 1st slave port. */
+ idx = xstats_ctrl->xstats_o_idx[j];
+ xstats_ctrl->xstats_o_idx_2nd[j] = idx;
+ } else {
+ /* Append the new items to the end of the map. */
+ idx = xstats_ctrl->mlx5_stats_n++;
+ xstats_ctrl->xstats_o_idx_2nd[j] = idx;
+ xstats_ctrl->info[idx] = mlx5_counters_init[j];
+ }
+ }
+ }
+ }
+ /* Dev counters are always at the last now. */
+ for (j = 0; j != xstats_n; j++) {
+ if (mlx5_counters_init[j].dev) {
+ idx = xstats_ctrl->mlx5_stats_n++;
+ xstats_ctrl->info[idx] = mlx5_counters_init[j];
+ xstats_ctrl->hw_stats[idx] = 0;
+ }
+ }
+ return 0;
+}
+
/**
* Init the structures to read device counters.
*
@@ -1590,76 +1727,44 @@ mlx5_os_stats_init(struct rte_eth_dev *dev)
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
struct mlx5_stats_ctrl *stats_ctrl = &priv->stats_ctrl;
- unsigned int i;
- unsigned int j;
- struct ifreq ifr;
struct ethtool_gstrings *strings = NULL;
- unsigned int dev_stats_n;
+ uint16_t dev_stats_n = 0;
+ uint16_t dev_stats_n_2nd = 0;
+ unsigned int max_stats_n;
unsigned int str_sz;
int ret;
+ bool bond_master = (priv->master && priv->pf_bond >= 0);
/* So that it won't aggregate for each init. */
xstats_ctrl->mlx5_stats_n = 0;
- ret = mlx5_os_get_stats_n(dev);
+ ret = mlx5_os_get_stats_n(dev, bond_master, &dev_stats_n, &dev_stats_n_2nd);
if (ret < 0) {
DRV_LOG(WARNING, "port %u no extended statistics available",
dev->data->port_id);
return;
}
- dev_stats_n = ret;
+ max_stats_n = RTE_MAX(dev_stats_n, dev_stats_n_2nd);
/* Allocate memory to grab stat names and values. */
- str_sz = dev_stats_n * ETH_GSTRING_LEN;
+ str_sz = max_stats_n * ETH_GSTRING_LEN;
strings = (struct ethtool_gstrings *)
mlx5_malloc(0, str_sz + sizeof(struct ethtool_gstrings), 0,
SOCKET_ID_ANY);
if (!strings) {
DRV_LOG(WARNING, "port %u unable to allocate memory for xstats",
- dev->data->port_id);
+ dev->data->port_id);
return;
}
- strings->cmd = ETHTOOL_GSTRINGS;
- strings->string_set = ETH_SS_STATS;
- strings->len = dev_stats_n;
- ifr.ifr_data = (caddr_t)strings;
- if (priv->master && priv->pf_bond >= 0)
- /* Bonding master. */
- ret = mlx5_ifreq_by_ifname(priv->sh->bond.ports[0].ifname,
- SIOCETHTOOL, &ifr);
- else
- ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
- if (ret) {
- DRV_LOG(WARNING, "port %u unable to get statistic names",
+ ret = mlx5_os_get_stats_strings(dev, bond_master, strings,
+ dev_stats_n, dev_stats_n_2nd);
+ if (ret < 0) {
+ DRV_LOG(WARNING, "port %u failed to get the stats strings",
dev->data->port_id);
goto free;
}
- for (i = 0; i != dev_stats_n; ++i) {
- const char *curr_string = (const char *)
- &strings->data[i * ETH_GSTRING_LEN];
-
- for (j = 0; j != xstats_n; ++j) {
- if (!strcmp(mlx5_counters_init[j].ctr_name,
- curr_string)) {
- unsigned int idx = xstats_ctrl->mlx5_stats_n++;
-
- xstats_ctrl->dev_table_idx[idx] = i;
- xstats_ctrl->info[idx] = mlx5_counters_init[j];
- break;
- }
- }
- }
- /* Add dev counters. */
- MLX5_ASSERT(xstats_ctrl->mlx5_stats_n <= MLX5_MAX_XSTATS);
- for (i = 0; i != xstats_n; ++i) {
- if (mlx5_counters_init[i].dev) {
- unsigned int idx = xstats_ctrl->mlx5_stats_n++;
-
- xstats_ctrl->info[idx] = mlx5_counters_init[i];
- xstats_ctrl->hw_stats[idx] = 0;
- }
- }
xstats_ctrl->stats_n = dev_stats_n;
+ xstats_ctrl->stats_n_2nd = dev_stats_n_2nd;
/* Copy to base at first time. */
- ret = mlx5_os_read_dev_counters(dev, xstats_ctrl->base);
+ ret = mlx5_os_read_dev_counters(dev, bond_master, xstats_ctrl->base);
if (ret)
DRV_LOG(ERR, "port %u cannot read device counters: %s",
dev->data->port_id, strerror(rte_errno));
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 263ebead7f..153374802a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -263,14 +263,22 @@ struct mlx5_counter_ctrl {
struct mlx5_xstats_ctrl {
/* Number of device stats. */
uint16_t stats_n;
+ /* Number of device stats, for the 2nd port in bond. */
+ uint16_t stats_n_2nd;
/* Number of device stats identified by PMD. */
- uint16_t mlx5_stats_n;
+ uint16_t mlx5_stats_n;
/* Index in the device counters table. */
uint16_t dev_table_idx[MLX5_MAX_XSTATS];
+ /* Index in the output table. */
+ uint16_t xstats_o_idx[MLX5_MAX_XSTATS];
uint64_t base[MLX5_MAX_XSTATS];
uint64_t xstats[MLX5_MAX_XSTATS];
uint64_t hw_stats[MLX5_MAX_XSTATS];
struct mlx5_counter_ctrl info[MLX5_MAX_XSTATS];
+ /* Index in the device counters table, for the 2nd port in bond. */
+ uint16_t dev_table_idx_2nd[MLX5_MAX_XSTATS];
+ /* Index in the output table, for the 2nd port in bond. */
+ uint16_t xstats_o_idx_2nd[MLX5_MAX_XSTATS];
};
struct mlx5_stats_ctrl {
@@ -2131,8 +2139,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
struct rte_dev_eeprom_info *info);
int mlx5_os_read_dev_stat(struct mlx5_priv *priv,
const char *ctr_name, uint64_t *stat);
-int mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats);
-int mlx5_os_get_stats_n(struct rte_eth_dev *dev);
+int mlx5_os_read_dev_counters(struct rte_eth_dev *dev, bool bond_master, uint64_t *stats);
+int mlx5_os_get_stats_n(struct rte_eth_dev *dev, bool bond_master,
+ uint16_t *n_stats, uint16_t *n_stats_sec);
void mlx5_os_stats_init(struct rte_eth_dev *dev);
int mlx5_get_flag_dropless_rq(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index 615e1d073d..f4ac58e2f9 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -39,24 +39,36 @@ mlx5_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *stats,
unsigned int n)
{
struct mlx5_priv *priv = dev->data->dev_private;
- unsigned int i;
- uint64_t counters[n];
+ uint64_t counters[MLX5_MAX_XSTATS];
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
+ unsigned int i;
+ uint16_t stats_n = 0;
+ uint16_t stats_n_2nd = 0;
uint16_t mlx5_stats_n = xstats_ctrl->mlx5_stats_n;
+ bool bond_master = (priv->master && priv->pf_bond >= 0);
if (n >= mlx5_stats_n && stats) {
- int stats_n;
int ret;
- stats_n = mlx5_os_get_stats_n(dev);
- if (stats_n < 0)
- return stats_n;
- if (xstats_ctrl->stats_n != stats_n)
+ ret = mlx5_os_get_stats_n(dev, bond_master, &stats_n, &stats_n_2nd);
+ if (ret < 0)
+ return ret;
+ /*
+ * The number of statistics fetched via "ETH_SS_STATS" may vary because
+ * of the port configuration each time. This is also true between 2
+ * ports. There might be a case that the numbers are the same even if
+ * configurations are different.
+ * It is not recommended to change the configuration without using
+ * RTE API. The port(traffic) restart may trigger another initialization
+ * to make sure the map are correct.
+ */
+ if (xstats_ctrl->stats_n != stats_n ||
+ (bond_master && xstats_ctrl->stats_n_2nd != stats_n_2nd))
mlx5_os_stats_init(dev);
- ret = mlx5_os_read_dev_counters(dev, counters);
- if (ret)
+ ret = mlx5_os_read_dev_counters(dev, bond_master, counters);
+ if (ret < 0)
return ret;
- for (i = 0; i != mlx5_stats_n; ++i) {
+ for (i = 0; i != mlx5_stats_n; i++) {
stats[i].id = i;
if (xstats_ctrl->info[i].dev) {
uint64_t wrap_n;
@@ -225,30 +237,32 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_xstats_ctrl *xstats_ctrl = &priv->xstats_ctrl;
- int stats_n;
unsigned int i;
uint64_t *counters;
int ret;
+ uint16_t stats_n = 0;
+ uint16_t stats_n_2nd = 0;
+ bool bond_master = (priv->master && priv->pf_bond >= 0);
- stats_n = mlx5_os_get_stats_n(dev);
- if (stats_n < 0) {
+ ret = mlx5_os_get_stats_n(dev, bond_master, &stats_n, &stats_n_2nd);
+ if (ret < 0) {
DRV_LOG(ERR, "port %u cannot get stats: %s", dev->data->port_id,
- strerror(-stats_n));
- return stats_n;
+ strerror(-ret));
+ return ret;
}
- if (xstats_ctrl->stats_n != stats_n)
+ if (xstats_ctrl->stats_n != stats_n ||
+ (bond_master && xstats_ctrl->stats_n_2nd != stats_n_2nd))
mlx5_os_stats_init(dev);
- counters = mlx5_malloc(MLX5_MEM_SYS, sizeof(*counters) *
- xstats_ctrl->mlx5_stats_n, 0,
- SOCKET_ID_ANY);
+ /* Considering to use stack directly. */
+ counters = mlx5_malloc(MLX5_MEM_SYS, sizeof(*counters) * xstats_ctrl->mlx5_stats_n,
+ 0, SOCKET_ID_ANY);
if (!counters) {
- DRV_LOG(WARNING, "port %u unable to allocate memory for xstats "
- "counters",
+ DRV_LOG(WARNING, "port %u unable to allocate memory for xstats counters",
dev->data->port_id);
rte_errno = ENOMEM;
return -rte_errno;
}
- ret = mlx5_os_read_dev_counters(dev, counters);
+ ret = mlx5_os_read_dev_counters(dev, bond_master, counters);
if (ret) {
DRV_LOG(ERR, "port %u cannot read device counters: %s",
dev->data->port_id, strerror(rte_errno));
diff --git a/drivers/net/mlx5/windows/mlx5_ethdev_os.c b/drivers/net/mlx5/windows/mlx5_ethdev_os.c
index a31e1b5494..49f750be68 100644
--- a/drivers/net/mlx5/windows/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/windows/mlx5_ethdev_os.c
@@ -178,20 +178,29 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
return -ENOTSUP;
}
-/**
+/*
* Query the number of statistics provided by ETHTOOL.
*
* @param dev
* Pointer to Ethernet device.
+ * @param bond_master
+ * Indicate if the device is a bond master.
+ * @param n_stats
+ * Pointer to number of stats to store.
+ * @param n_stats_sec
+ * Pointer to number of stats to store for the 2nd port of the bond.
*
* @return
- * Number of statistics on success, negative errno value otherwise and
- * rte_errno is set.
+ * 0 on success, negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_os_get_stats_n(struct rte_eth_dev *dev)
+mlx5_os_get_stats_n(struct rte_eth_dev *dev, bool bond_master,
+ uint16_t *n_stats, uint16_t *n_stats_sec)
{
RTE_SET_USED(dev);
+ RTE_SET_USED(bond_master);
+ RTE_SET_USED(n_stats);
+ RTE_SET_USED(n_stats_sec);
return -ENOTSUP;
}
@@ -221,6 +230,8 @@ mlx5_os_stats_init(struct rte_eth_dev *dev)
*
* @param dev
* Pointer to Ethernet device.
+ * @param bond_master
+ * Indicate if the device is a bond master.
* @param[out] stats
* Counters table output buffer.
*
@@ -229,9 +240,10 @@ mlx5_os_stats_init(struct rte_eth_dev *dev)
* rte_errno is set.
*/
int
-mlx5_os_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
+mlx5_os_read_dev_counters(struct rte_eth_dev *dev, bool bond_master, uint64_t *stats)
{
RTE_SET_USED(dev);
+ RTE_SET_USED(bond_master);
RTE_SET_USED(stats);
return -ENOTSUP;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.901405245 +0800
+++ 0060-net-mlx5-fix-counters-map-in-bonding-mode.patch 2024-04-13 20:43:05.007753918 +0800
@@ -1 +1 @@
-From a687c3e658c2d889052089af8340bc0b9299c856 Mon Sep 17 00:00:00 2001
+From 0c31d1220ffaff4742154c3c957ab5305e5f5c3a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a687c3e658c2d889052089af8340bc0b9299c856 ]
@@ -50 +52,0 @@
-Cc: stable@dpdk.org
@@ -62 +64 @@
-index 92c47a3b3d..eb47c284ec 100644
+index dd5a0c546d..0ee8c58ba7 100644
@@ -237 +239 @@
-@@ -1615,6 +1657,101 @@ static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
+@@ -1578,6 +1620,101 @@ static const struct mlx5_counter_ctrl mlx5_counters_init[] = {
@@ -339 +341 @@
-@@ -1627,76 +1764,44 @@ mlx5_os_stats_init(struct rte_eth_dev *dev)
+@@ -1590,76 +1727,44 @@ mlx5_os_stats_init(struct rte_eth_dev *dev)
@@ -431 +433 @@
-index f11a0181b8..fb3df76cac 100644
+index 263ebead7f..153374802a 100644
@@ -458 +460 @@
-@@ -2182,8 +2190,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
+@@ -2131,8 +2139,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
* patch 'net/mlx5: fix flow action template expansion' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (58 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix counters map in bonding mode' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: remove device status check in flow creation' " Xueming Li
` (63 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a10a65c3967651721c6a09c6d554536c2b769ac5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a10a65c3967651721c6a09c6d554536c2b769ac5 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 29 Feb 2024 12:19:55 +0200
Subject: [PATCH] net/mlx5: fix flow action template expansion
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 487191742f4620392ca103c4e2a3a69491a2ea2c ]
MLX5 PMD actions template compilation may implicitly add a
MODIFY_HEADER action to the actions list provided by the application.
MLX5 actions in a template list must be arranged according to the
order supported by the HW.
The PMD must place the new MODIFY_HEADER in the correct location
relative to the existing actions.
The patch adds the indirect actions list to the calculation of the
new MODIFY_HEADER location.
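The placement rule can be illustrated with a small sketch. This is a
simplified, hypothetical miniature of the logic (the names `rel_pos`
and `find_mh_slot` are illustrations, not the PMD code): each existing
action is classified as having to sit before or after the new
modify-header (MH) action, and the template is walked backwards to
find the insertion slot.

```c
#include <assert.h>

/* Hypothetical classification of each existing template action
 * relative to the new modify-header (MH) action, mirroring the
 * relative-position enum introduced by the patch. */
enum rel_pos {
	POS_BEFORE_MH,	/* action must precede the new MH */
	POS_AFTER_MH,	/* action must follow the new MH */
	POS_UNKNOWN,	/* cannot be classified: expansion fails */
};

/*
 * Walk the classified actions backwards and return the slot where the
 * new MH is inserted: right after the last action that must precede
 * it. Returns -1 on an unclassifiable action, analogous to the
 * MLX5_HW_EXPAND_MH_FAILED error in the patch.
 */
static int
find_mh_slot(const enum rel_pos *acts, int n)
{
	for (int i = n - 1; i >= 0; i--) {
		if (acts[i] == POS_UNKNOWN)
			return -1;
		if (acts[i] == POS_BEFORE_MH)
			return i + 1;
	}
	return 0; /* every action goes after the MH */
}
```

An indirect-list action contributes its classification through the
handle type, which is why the patch inspects the list's legacy,
mirror, or reformat variant before choosing a position.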
Fixes: e26f50adbf38 ("net/mlx5: support indirect list meter mark action")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 80 +++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7cef9bd3ff..7e4ead1875 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -110,6 +110,9 @@ mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
struct mlx5_tbl_multi_pattern_ctx *mpat,
struct rte_flow_error *error);
+static __rte_always_inline enum mlx5_indirect_list_type
+flow_hw_inlist_type_get(const struct rte_flow_action *actions);
+
static __rte_always_inline int
mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
{
@@ -5456,6 +5459,69 @@ mlx5_decap_encap_reformat_type(const struct rte_flow_action *actions,
MLX5_FLOW_ACTION_ENCAP : MLX5_FLOW_ACTION_DECAP;
}
+enum mlx5_hw_indirect_list_relative_position {
+ MLX5_INDIRECT_LIST_POSITION_UNKNOWN = -1,
+ MLX5_INDIRECT_LIST_POSITION_BEFORE_MH = 0,
+ MLX5_INDIRECT_LIST_POSITION_AFTER_MH,
+};
+
+static enum mlx5_hw_indirect_list_relative_position
+mlx5_hw_indirect_list_mh_position(const struct rte_flow_action *action)
+{
+ const struct rte_flow_action_indirect_list *conf = action->conf;
+ enum mlx5_indirect_list_type list_type = mlx5_get_indirect_list_type(conf->handle);
+ enum mlx5_hw_indirect_list_relative_position pos = MLX5_INDIRECT_LIST_POSITION_UNKNOWN;
+ const union {
+ struct mlx5_indlst_legacy *legacy;
+ struct mlx5_hw_encap_decap_action *reformat;
+ struct rte_flow_action_list_handle *handle;
+ } h = { .handle = conf->handle};
+
+ switch (list_type) {
+ case MLX5_INDIRECT_ACTION_LIST_TYPE_LEGACY:
+ switch (h.legacy->legacy_type) {
+ case RTE_FLOW_ACTION_TYPE_AGE:
+ case RTE_FLOW_ACTION_TYPE_COUNT:
+ case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+ case RTE_FLOW_ACTION_TYPE_METER_MARK:
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ pos = MLX5_INDIRECT_LIST_POSITION_BEFORE_MH;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ pos = MLX5_INDIRECT_LIST_POSITION_AFTER_MH;
+ break;
+ default:
+ pos = MLX5_INDIRECT_LIST_POSITION_UNKNOWN;
+ break;
+ }
+ break;
+ case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
+ pos = MLX5_INDIRECT_LIST_POSITION_AFTER_MH;
+ break;
+ case MLX5_INDIRECT_ACTION_LIST_TYPE_REFORMAT:
+ switch (h.reformat->action_type) {
+ case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
+ case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
+ pos = MLX5_INDIRECT_LIST_POSITION_BEFORE_MH;
+ break;
+ case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
+ case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+ pos = MLX5_INDIRECT_LIST_POSITION_AFTER_MH;
+ break;
+ default:
+ pos = MLX5_INDIRECT_LIST_POSITION_UNKNOWN;
+ break;
+ }
+ break;
+ default:
+ pos = MLX5_INDIRECT_LIST_POSITION_UNKNOWN;
+ break;
+ }
+ return pos;
+}
+
+#define MLX5_HW_EXPAND_MH_FAILED 0xffff
+
static inline uint16_t
flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
struct rte_flow_action masks[],
@@ -5492,6 +5558,7 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
* @see action_order_arr[]
*/
for (i = act_num - 2; (int)i >= 0; i--) {
+ enum mlx5_hw_indirect_list_relative_position pos;
enum rte_flow_action_type type = actions[i].type;
uint64_t reformat_type;
@@ -5522,6 +5589,13 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
if (actions[i - 1].type == RTE_FLOW_ACTION_TYPE_RAW_DECAP)
i--;
break;
+ case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
+ pos = mlx5_hw_indirect_list_mh_position(&actions[i]);
+ if (pos == MLX5_INDIRECT_LIST_POSITION_UNKNOWN)
+ return MLX5_HW_EXPAND_MH_FAILED;
+ if (pos == MLX5_INDIRECT_LIST_POSITION_BEFORE_MH)
+ goto insert;
+ break;
default:
i++; /* new MF inserted AFTER actions[i] */
goto insert;
@@ -6476,6 +6550,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
action_flags,
act_num,
expand_mf_num);
+ if (pos == MLX5_HW_EXPAND_MH_FAILED) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "modify header expansion failed");
+ return NULL;
+ }
act_num += expand_mf_num;
for (i = pos + expand_mf_num; i < act_num; i++)
src_off[i] += expand_mf_num;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.928512110 +0800
+++ 0061-net-mlx5-fix-flow-action-template-expansion.patch 2024-04-13 20:43:05.007753918 +0800
@@ -1 +1 @@
-From 487191742f4620392ca103c4e2a3a69491a2ea2c Mon Sep 17 00:00:00 2001
+From a10a65c3967651721c6a09c6d554536c2b769ac5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 487191742f4620392ca103c4e2a3a69491a2ea2c ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index c1dbdc5f19..f3f4649c5d 100644
+index 7cef9bd3ff..7e4ead1875 100644
@@ -29,3 +31,3 @@
-@@ -88,6 +88,9 @@ mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
- static void
- mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
+@@ -110,6 +110,9 @@ mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+ struct mlx5_tbl_multi_pattern_ctx *mpat,
+ struct rte_flow_error *error);
@@ -39 +41 @@
-@@ -5820,6 +5823,69 @@ mlx5_decap_encap_reformat_type(const struct rte_flow_action *actions,
+@@ -5456,6 +5459,69 @@ mlx5_decap_encap_reformat_type(const struct rte_flow_action *actions,
@@ -109 +111 @@
-@@ -5856,6 +5922,7 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+@@ -5492,6 +5558,7 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
@@ -117 +119 @@
-@@ -5886,6 +5953,13 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+@@ -5522,6 +5589,13 @@ flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
@@ -131 +133 @@
-@@ -6891,6 +6965,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
+@@ -6476,6 +6550,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
* patch 'net/mlx5: remove device status check in flow creation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (59 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix flow action template expansion' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'test: fix probing in secondary process' " Xueming Li
` (62 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Bing Zhao; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=272feb8eb990b5ef05604e0359d39dc5650f449c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 272feb8eb990b5ef05604e0359d39dc5650f449c Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Thu, 29 Feb 2024 12:51:56 +0100
Subject: [PATCH] net/mlx5: remove device status check in flow creation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4095ce34095e33ae9e8b19150b9280ff8737a590 ]
A flow rule can be inserted even before the device is started; the
only exception is a queue or RSS action.
For the other interfaces of the template API, the start status is not
checked. Checking it would cause a cache miss or eviction, since the
flag is located on another cache line.
Fixes: f1fecffa88df ("net/mlx5: support Direct Rules action template API")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e4ead1875..6aaf3aee2a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3322,10 +3322,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
uint32_t res_idx = 0;
int ret;
- if (unlikely((!dev->data->dev_started))) {
- rte_errno = EINVAL;
- goto error;
- }
job = flow_hw_job_get(priv, queue);
if (!job) {
rte_errno = ENOMEM;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:06.968868057 +0800
+++ 0062-net-mlx5-remove-device-status-check-in-flow-creation.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 4095ce34095e33ae9e8b19150b9280ff8737a590 Mon Sep 17 00:00:00 2001
+From 272feb8eb990b5ef05604e0359d39dc5650f449c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4095ce34095e33ae9e8b19150b9280ff8737a590 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -19,2 +21,2 @@
- drivers/net/mlx5/mlx5_flow_hw.c | 5 -----
- 1 file changed, 5 deletions(-)
+ drivers/net/mlx5/mlx5_flow_hw.c | 4 ----
+ 1 file changed, 4 deletions(-)
@@ -23 +25 @@
-index 6f43e88864..8ca866059d 100644
+index 7e4ead1875..6aaf3aee2a 100644
@@ -26 +28 @@
-@@ -3526,11 +3526,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
+@@ -3322,10 +3322,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
@@ -31,3 +33,2 @@
-- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-- "Port must be started before enqueueing flow operations");
-- return NULL;
+- rte_errno = EINVAL;
+- goto error;
@@ -35,3 +36,3 @@
- flow = mlx5_ipool_malloc(table->flow, &flow_idx);
- if (!flow)
- goto error;
+ job = flow_hw_job_get(priv, queue);
+ if (!job) {
+ rte_errno = ENOMEM;
^ permalink raw reply [flat|nested] 263+ messages in thread
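The trade-off behind the patch above — dropping a per-enqueue status check that touches a cold cache line, while still validating actions that genuinely need a started device — can be sketched with a hypothetical device context. The struct and function names here are illustrative, not the mlx5 driver API:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical device context; names are illustrative, not mlx5's. */
struct dev_ctx {
	bool started; /* in the real driver this sits on a cold cache line */
};

/* Fast path: enqueue a rule without re-reading the status flag
 * (mirrors the check removed by the patch). */
static int
flow_enqueue(struct dev_ctx *dev, int *handle)
{
	(void)dev;
	*handle = 1; /* pretend a rule was programmed */
	return 0;
}

/* Queue/RSS actions still require a started device; validate once at
 * template/table setup time instead of on every enqueue. */
static int
validate_queue_action(const struct dev_ctx *dev)
{
	return dev->started ? 0 : -EINVAL;
}
```

The point of the design choice is that the hot path never touches `started` at all; only the setup-time validation does.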
* patch 'test: fix probing in secondary process' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (60 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: remove device status check in flow creation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'bus/vdev: fix devargs " Xueming Li
` (61 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Mingjin Ye; +Cc: Zhimin Huang, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ef4c8a57f320f01061b399b62e9dac7836dea83b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ef4c8a57f320f01061b399b62e9dac7836dea83b Mon Sep 17 00:00:00 2001
From: Mingjin Ye <mingjinx.ye@intel.com>
Date: Tue, 14 Nov 2023 10:28:15 +0000
Subject: [PATCH] test: fix probing in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b3ce7891ad386310abab56352053a46ba06ca72f ]
In EAL-related test cases, the allow parameters are not passed to
the secondary process, resulting in unexpected NICs being loaded.
This patch fixes this issue by appending the allow parameters to
the secondary process.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/process.h | 46 ++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 42 insertions(+), 4 deletions(-)
diff --git a/app/test/process.h b/app/test/process.h
index c576c42349..9fb2bf481c 100644
--- a/app/test/process.h
+++ b/app/test/process.h
@@ -17,6 +17,7 @@
#include <dirent.h>
#include <rte_string_fns.h> /* strlcpy */
+#include <rte_devargs.h>
#ifdef RTE_EXEC_ENV_FREEBSD
#define self "curproc"
@@ -34,6 +35,34 @@ extern uint16_t flag_for_send_pkts;
#endif
#endif
+#define PREFIX_ALLOW "--allow="
+
+static int
+add_parameter_allow(char **argv, int max_capacity)
+{
+ struct rte_devargs *devargs;
+ int count = 0;
+
+ RTE_EAL_DEVARGS_FOREACH(NULL, devargs) {
+ if (strlen(devargs->name) == 0)
+ continue;
+
+ if (devargs->data == NULL || strlen(devargs->data) == 0) {
+ if (asprintf(&argv[count], PREFIX_ALLOW"%s", devargs->name) < 0)
+ break;
+ } else {
+ if (asprintf(&argv[count], PREFIX_ALLOW"%s,%s",
+ devargs->name, devargs->data) < 0)
+ break;
+ }
+
+ if (++count == max_capacity)
+ break;
+ }
+
+ return count;
+}
+
/*
* launches a second copy of the test process using the given argv parameters,
* which should include argv[0] as the process name. To identify in the
@@ -43,8 +72,10 @@ extern uint16_t flag_for_send_pkts;
static inline int
process_dup(const char *const argv[], int numargs, const char *env_value)
{
- int num;
- char *argv_cpy[numargs + 1];
+ int num = 0;
+ char **argv_cpy;
+ int allow_num;
+ int argv_num;
int i, status;
char path[32];
#ifdef RTE_LIB_PDUMP
@@ -58,14 +89,21 @@ process_dup(const char *const argv[], int numargs, const char *env_value)
if (pid < 0)
return -1;
else if (pid == 0) {
+ allow_num = rte_devargs_type_count(RTE_DEVTYPE_ALLOWED);
+ argv_num = numargs + allow_num + 1;
+ argv_cpy = calloc(argv_num, sizeof(char *));
+ if (!argv_cpy)
+ rte_panic("Memory allocation failed\n");
+
/* make a copy of the arguments to be passed to exec */
for (i = 0; i < numargs; i++) {
argv_cpy[i] = strdup(argv[i]);
if (argv_cpy[i] == NULL)
rte_panic("Error dup args\n");
}
- argv_cpy[i] = NULL;
- num = numargs;
+ if (allow_num > 0)
+ num = add_parameter_allow(&argv_cpy[i], allow_num);
+ num += numargs;
#ifdef RTE_EXEC_ENV_LINUX
{
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.000578116 +0800
+++ 0063-test-fix-probing-in-secondary-process.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From b3ce7891ad386310abab56352053a46ba06ca72f Mon Sep 17 00:00:00 2001
+From ef4c8a57f320f01061b399b62e9dac7836dea83b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b3ce7891ad386310abab56352053a46ba06ca72f ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
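The allow-list construction used by `add_parameter_allow()` above can be sketched outside DPDK with a simplified devargs stand-in. The `struct fake_devargs` type is a hypothetical placeholder for the real `rte_devargs` iteration, not the actual API:

```c
#define _GNU_SOURCE /* for asprintf() on glibc */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PREFIX_ALLOW "--allow="

/* Simplified stand-in for rte_devargs: name plus optional driver args. */
struct fake_devargs {
	const char *name;
	const char *data;
};

/* Build "--allow=<name>[,<args>]" argv entries, as the patch does for
 * the secondary process. Returns the number of entries produced. */
static int
build_allow_args(const struct fake_devargs *da, int n, char **argv, int cap)
{
	int count = 0;

	for (int i = 0; i < n && count < cap; i++) {
		int rc;

		if (da[i].data == NULL || da[i].data[0] == '\0')
			rc = asprintf(&argv[count], PREFIX_ALLOW "%s",
				      da[i].name);
		else
			rc = asprintf(&argv[count], PREFIX_ALLOW "%s,%s",
				      da[i].name, da[i].data);
		if (rc < 0)
			break; /* allocation failure: stop early */
		count++;
	}
	return count;
}
```

Each entry is heap-allocated by `asprintf()` and must be freed by the caller, matching the strdup'd argv copies in `process_dup()`.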
* patch 'bus/vdev: fix devargs in secondary process' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (61 preceding siblings ...)
2024-04-13 12:49 ` patch 'test: fix probing in secondary process' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'config: fix CPU instruction set for cross-build' " Xueming Li
` (60 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Mingjin Ye; +Cc: Anatoly Burakov, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6148604a438aad21e5d74f414976666b370582c1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6148604a438aad21e5d74f414976666b370582c1 Mon Sep 17 00:00:00 2001
From: Mingjin Ye <mingjinx.ye@intel.com>
Date: Fri, 1 Sep 2023 07:24:09 +0000
Subject: [PATCH] bus/vdev: fix devargs in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6666628362c94a0b567a39a0177539c12c97d999 ]
When a device is created by a secondary process, an empty devargs is
temporarily generated and bound to it. This causes the device to not
be associated with the correct devargs, and the empty devargs are not
released when the resource is freed.
This patch fixes the issue by matching the devargs when inserting a
device in secondary process.
Fixes: dda987315ca2 ("vdev: make virtual bus use its device struct")
Fixes: a16040453968 ("eal: extract vdev infra")
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
drivers/bus/vdev/vdev.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 05582f1727..14cf856237 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -263,6 +263,22 @@ alloc_devargs(const char *name, const char *args)
return devargs;
}
+static struct rte_devargs *
+vdev_devargs_lookup(const char *name)
+{
+ struct rte_devargs *devargs;
+ char dev_name[32];
+
+ RTE_EAL_DEVARGS_FOREACH("vdev", devargs) {
+ devargs->bus->parse(devargs->name, &dev_name);
+ if (strcmp(dev_name, name) == 0) {
+ VDEV_LOG(INFO, "devargs matched %s", dev_name);
+ return devargs;
+ }
+ }
+ return NULL;
+}
+
static int
insert_vdev(const char *name, const char *args,
struct rte_vdev_device **p_dev,
@@ -275,7 +291,11 @@ insert_vdev(const char *name, const char *args,
if (name == NULL)
return -EINVAL;
- devargs = alloc_devargs(name, args);
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ devargs = alloc_devargs(name, args);
+ else
+ devargs = vdev_devargs_lookup(name);
+
if (!devargs)
return -ENOMEM;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.025817983 +0800
+++ 0064-bus-vdev-fix-devargs-in-secondary-process.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 6666628362c94a0b567a39a0177539c12c97d999 Mon Sep 17 00:00:00 2001
+From 6148604a438aad21e5d74f414976666b370582c1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6666628362c94a0b567a39a0177539c12c97d999 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
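The fix above replaces allocation with a lookup in the secondary process: instead of binding a fresh empty devargs to the device, it finds the one the primary already registered. The pattern can be sketched with a minimal linked list; `struct fake_devargs` is an illustrative stand-in, not the `rte_devargs` API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the vdev devargs list. */
struct fake_devargs {
	const char *name;
	struct fake_devargs *next;
};

/* Secondary-process path: reuse the devargs the primary already
 * registered instead of allocating a fresh (empty) one. */
static struct fake_devargs *
devargs_lookup(struct fake_devargs *head, const char *name)
{
	for (struct fake_devargs *d = head; d != NULL; d = d->next)
		if (strcmp(d->name, name) == 0)
			return d;
	return NULL; /* unknown device: fail instead of binding empty args */
}
```

Returning NULL for an unknown name makes `insert_vdev()` fail with -ENOMEM rather than silently attaching an empty devargs that is never released.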
* patch 'config: fix CPU instruction set for cross-build' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (62 preceding siblings ...)
2024-04-13 12:49 ` patch 'bus/vdev: fix devargs " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'test/mbuf: fix external mbuf case with assert enabled' " Xueming Li
` (59 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Joyce Kong; +Cc: Ruifeng Wang, Stephen Hemminger, Pavan Nikhilesh, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ced51dd5efede54fd04d0b1f7b4da3c5785df7bf
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ced51dd5efede54fd04d0b1f7b4da3c5785df7bf Mon Sep 17 00:00:00 2001
From: Joyce Kong <joyce.kong@arm.com>
Date: Tue, 5 Dec 2023 03:52:58 +0000
Subject: [PATCH] config: fix CPU instruction set for cross-build
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d74543f8ad30db164c08ec69910b05d6811b1b89 ]
The platform value can be 'native' only for non-cross builds.
Move the logic that adjusts cpu_instruction_set when platform
equals 'native' into the non-cross-build branch.
Fixes: bf66003b51ec ("build: use platform for generic and native builds")
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/meson.build | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/config/meson.build b/config/meson.build
index 65662c5de3..898b719929 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -121,13 +121,14 @@ else
cpu_instruction_set = 'generic'
endif
endif
+ if platform == 'native'
+ if cpu_instruction_set == 'auto'
+ cpu_instruction_set = 'native'
+ endif
+ endif
endif
-if platform == 'native'
- if cpu_instruction_set == 'auto'
- cpu_instruction_set = 'native'
- endif
-elif platform == 'generic'
+if platform == 'generic'
if cpu_instruction_set == 'auto'
cpu_instruction_set = 'generic'
endif
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.050449350 +0800
+++ 0065-config-fix-CPU-instruction-set-for-cross-build.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From d74543f8ad30db164c08ec69910b05d6811b1b89 Mon Sep 17 00:00:00 2001
+From ced51dd5efede54fd04d0b1f7b4da3c5785df7bf Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d74543f8ad30db164c08ec69910b05d6811b1b89 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 8cb6429313..bbb931a457 100644
+index 65662c5de3..898b719929 100644
@@ -25 +27 @@
-@@ -128,13 +128,14 @@ else
+@@ -121,13 +121,14 @@ else
^ permalink raw reply [flat|nested] 263+ messages in thread
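The corrected meson logic above can be modeled as a small resolution function: 'auto' resolves to 'native' only for non-cross builds, while a cross build with 'auto' falls back to 'generic'. This C sketch is a simplified model of the meson branches, not the build system itself:

```c
#include <assert.h>
#include <string.h>

/* Model of the fixed config/meson.build branches. */
static const char *
resolve_isa(const char *platform, const char *isa, int is_cross)
{
	if (is_cross) {
		/* cross builds never inherit the build machine's ISA */
		if (strcmp(isa, "auto") == 0)
			return "generic";
		return isa;
	}
	if (strcmp(platform, "native") == 0 && strcmp(isa, "auto") == 0)
		return "native";
	if (strcmp(platform, "generic") == 0 && strcmp(isa, "auto") == 0)
		return "generic";
	return isa;
}
```

The bug being fixed is visible in the second case below: before the patch, a cross build could incorrectly resolve 'auto' to 'native'.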
* patch 'test/mbuf: fix external mbuf case with assert enabled' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (63 preceding siblings ...)
2024-04-13 12:49 ` patch 'config: fix CPU instruction set for cross-build' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'test: assume C source files are UTF-8 encoded' " Xueming Li
` (58 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: Olivier Matz, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=de3976eb27efa755499f0982e0be846e305e7226
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From de3976eb27efa755499f0982e0be846e305e7226 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Thu, 23 Nov 2023 12:12:21 +0530
Subject: [PATCH] test/mbuf: fix external mbuf case with assert enabled
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6dbaa4ee67135ac6ff8ef35fa98a93e0f08af494 ]
When RTE_ENABLE_ASSERT is defined, the test_mbuf application fails
because it tries to attach an external buffer to a cloned mbuf to
which the external buffer is already attached.
Remove the redundant attach and update the ol_flags check so that
test_mbuf passes CI with assertions enabled.
Fixes: 7b295dceea07 ("test/mbuf: add unit test cases")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/test_mbuf.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index d7393df7eb..a39288a5f8 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -2345,16 +2345,13 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
__func__);
- /* allocate one more mbuf */
+ /* allocate one more mbuf, it is attached to the same external buffer */
clone = rte_pktmbuf_clone(m, pktmbuf_pool);
if (clone == NULL)
GOTO_FAIL("%s: mbuf clone allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(clone) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- /* attach the same external buffer to the cloned mbuf */
- rte_pktmbuf_attach_extbuf(clone, ext_buf_addr, buf_iova, buf_len,
- ret_shinfo);
if (clone->ol_flags != RTE_MBUF_F_EXTERNAL)
GOTO_FAIL("%s: External buffer is not attached to mbuf\n",
__func__);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.075992217 +0800
+++ 0066-test-mbuf-fix-external-mbuf-case-with-assert-enabled.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 6dbaa4ee67135ac6ff8ef35fa98a93e0f08af494 Mon Sep 17 00:00:00 2001
+From de3976eb27efa755499f0982e0be846e305e7226 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6dbaa4ee67135ac6ff8ef35fa98a93e0f08af494 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 51ea6ef1c4..17be977f31 100644
+index d7393df7eb..a39288a5f8 100644
@@ -25 +27 @@
-@@ -2346,16 +2346,13 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
+@@ -2345,16 +2345,13 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
^ permalink raw reply [flat|nested] 263+ messages in thread
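The reason the second attach was wrong is that cloning an mbuf with an external buffer already shares that buffer and bumps its refcount; attaching again would double-count. A minimal refcount model (not the `rte_mbuf` API — types and flag values here are illustrative) shows the accounting:

```c
#include <assert.h>

/* Minimal model of mbuf external-buffer sharing. */
struct ext_shinfo {
	int refcnt; /* how many mbufs reference the external buffer */
};

struct mbuf {
	struct ext_shinfo *shinfo;
	unsigned int ol_flags;
};

#define F_EXTERNAL 0x1u /* stand-in for RTE_MBUF_F_EXTERNAL */

static void
attach_extbuf(struct mbuf *m, struct ext_shinfo *s)
{
	m->shinfo = s;
	s->refcnt++;
	m->ol_flags |= F_EXTERNAL;
}

/* Cloning an mbuf with an external buffer attaches the clone to the
 * same buffer -- no second attach_extbuf() call is needed. */
static void
clone_mbuf(struct mbuf *clone, const struct mbuf *m)
{
	clone->shinfo = m->shinfo;
	clone->shinfo->refcnt++;
	clone->ol_flags = m->ol_flags;
}
```

With the redundant attach, the refcount would read 3 for 2 owners, so the buffer could never be freed correctly — which is what the driver assertion catches.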
* patch 'test: assume C source files are UTF-8 encoded' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (64 preceding siblings ...)
2024-04-13 12:49 ` patch 'test/mbuf: fix external mbuf case with assert enabled' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'test: do not count skipped tests as executed' " Xueming Li
` (57 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Robin Jarry
Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5c5df0f29203b18a22deb19a5046152961b07d12
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5c5df0f29203b18a22deb19a5046152961b07d12 Mon Sep 17 00:00:00 2001
From: Robin Jarry <rjarry@redhat.com>
Date: Tue, 5 Mar 2024 14:46:15 +0100
Subject: [PATCH] test: assume C source files are UTF-8 encoded
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 39846d56c48ba9a5da23f4a8af40cd46b47e9367 ]
Instead of relying on the default locale from the environment (LC_ALL),
explicitly read the files as utf-8 encoded.
Fixes: 0aeaf75df879 ("test: define unit tests suites based on test types")
Signed-off-by: Robin Jarry <rjarry@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
buildtools/get-test-suites.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/buildtools/get-test-suites.py b/buildtools/get-test-suites.py
index 574c233aa8..c61f6a273f 100644
--- a/buildtools/get-test-suites.py
+++ b/buildtools/get-test-suites.py
@@ -19,7 +19,7 @@ def get_fast_test_params(test_name, ln):
return f":{nohuge.strip().lower()}:{asan.strip().lower()}"
for fname in input_list:
- with open(fname) as f:
+ with open(fname, "r", encoding="utf-8") as f:
contents = [ln.strip() for ln in f.readlines()]
test_lines = [ln for ln in contents if test_def_regex.match(ln)]
non_suite_tests.extend([non_suite_regex.match(ln).group(1)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.103063882 +0800
+++ 0067-test-assume-C-source-files-are-UTF-8-encoded.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 39846d56c48ba9a5da23f4a8af40cd46b47e9367 Mon Sep 17 00:00:00 2001
+From 5c5df0f29203b18a22deb19a5046152961b07d12 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 39846d56c48ba9a5da23f4a8af40cd46b47e9367 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'test: do not count skipped tests as executed' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (65 preceding siblings ...)
2024-04-13 12:49 ` patch 'test: assume C source files are UTF-8 encoded' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'examples/packet_ordering: fix Rx with reorder mode disabled' " Xueming Li
` (56 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Akhil Goyal, Ciara Power, Tyler Retzlaff, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e8dccbca30cec3504b043b3e4da6951c0f9f8168
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e8dccbca30cec3504b043b3e4da6951c0f9f8168 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Mon, 13 Nov 2023 15:05:33 +0000
Subject: [PATCH] test: do not count skipped tests as executed
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a620df6df6d61660661afade09760b2dfba4eb42 ]
The logic around skipped tests is a little confusing in the unit test
runner.
* Any explicitly disabled tests are counted as skipped but not
executed.
* Any tests that return TEST_SKIPPED are counted as both skipped and
executed, using the same statistics counters.
This makes the stats very strange and hard to correlate, since the
totals don't add up. One would expect that SKIPPED + EXECUTED +
UNSUPPORTED == TOTAL, and that PASSED + FAILED == EXECUTED.
To achieve this, mark any tests returning TEST_SKIPPED, or ENOTSUP as
not having executed.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
app/test/test.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/app/test/test.c b/app/test/test.c
index bfa9ea52e3..7b882a59de 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -375,11 +375,13 @@ unit_test_suite_runner(struct unit_test_suite *suite)
if (test_success == TEST_SUCCESS)
suite->succeeded++;
- else if (test_success == TEST_SKIPPED)
+ else if (test_success == TEST_SKIPPED) {
suite->skipped++;
- else if (test_success == -ENOTSUP)
+ suite->executed--;
+ } else if (test_success == -ENOTSUP) {
suite->unsupported++;
- else
+ suite->executed--;
+ } else
suite->failed++;
} else if (test_success == -ENOTSUP) {
suite->unsupported++;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.128290849 +0800
+++ 0068-test-do-not-count-skipped-tests-as-executed.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From a620df6df6d61660661afade09760b2dfba4eb42 Mon Sep 17 00:00:00 2001
+From e8dccbca30cec3504b043b3e4da6951c0f9f8168 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a620df6df6d61660661afade09760b2dfba4eb42 ]
@@ -20,2 +22,0 @@
-Cc: stable@dpdk.org
-
@@ -31 +32 @@
-index 8b25615913..680351f6a3 100644
+index bfa9ea52e3..7b882a59de 100644
@@ -34 +35 @@
-@@ -369,11 +369,13 @@ unit_test_suite_runner(struct unit_test_suite *suite)
+@@ -375,11 +375,13 @@ unit_test_suite_runner(struct unit_test_suite *suite)
^ permalink raw reply [flat|nested] 263+ messages in thread
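The corrected bookkeeping above can be condensed into a small sketch: the runner increments `executed` up front, then backs it out for skipped and unsupported results, so the invariants SKIPPED + EXECUTED + UNSUPPORTED == TOTAL and PASSED + FAILED == EXECUTED hold. Names below are illustrative, not the app/test internals:

```c
#include <assert.h>

/* Sketch of the fixed suite statistics. */
struct suite_stats {
	int executed, succeeded, failed, skipped, unsupported;
};

enum { TEST_SUCCESS, TEST_SKIPPED, TEST_FAILED, TEST_ENOTSUP };

static void
record_result(struct suite_stats *s, int result)
{
	s->executed++; /* provisional, as in the runner */
	switch (result) {
	case TEST_SUCCESS:
		s->succeeded++;
		break;
	case TEST_SKIPPED:
		s->skipped++;
		s->executed--; /* skipped tests did not run */
		break;
	case TEST_ENOTSUP:
		s->unsupported++;
		s->executed--; /* unsupported tests did not run */
		break;
	default:
		s->failed++;
		break;
	}
}
```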
* patch 'examples/packet_ordering: fix Rx with reorder mode disabled' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (66 preceding siblings ...)
2024-04-13 12:49 ` patch 'test: do not count skipped tests as executed' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'examples/l3fwd: fix Rx over not ready port' " Xueming Li
` (55 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Qian Hao; +Cc: Volodymyr Fialko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c Mon Sep 17 00:00:00 2001
From: Qian Hao <qi_an_hao@126.com>
Date: Wed, 13 Dec 2023 19:07:18 +0800
Subject: [PATCH] examples/packet_ordering: fix Rx with reorder mode disabled
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7ba49dc729937ea97642a615e9b08f33919b94f4 ]
The packet_ordering example works in two modes (selected via --disable-reorder):
- When reorder is enabled: rx_thread - N*worker_thread - send_thread
- When reorder is disabled: rx_thread - N*worker_thread - tx_thread
N parallel worker_thread(s) generate out-of-order packets.
When reorder is enabled, send_thread uses the sequence number generated in
rx_thread (L459) to enforce packet ordering. Otherwise tx_thread just
sends any packet it receives.
rx_thread writes sequence number into a dynamic field, which is only
registered by calling rte_reorder_create() (Line 741) when reorder is
enabled. However, rx_thread marks sequence number onto each packet no
matter whether reorder is enabled, overwriting the leading bytes in packet
mbufs when reorder is disabled, resulting in segfaults when PMD tries to
DMA packets.
`if (!disable_reorder_flag) {...}` is added in rx_thread to fix the bug.
The test is inlined by the compiler to prevent any performance loss.
Signed-off-by: Qian Hao <qi_an_hao@126.com>
Acked-by: Volodymyr Fialko <vfialko@marvell.com>
---
.mailmap | 1 +
examples/packet_ordering/main.c | 32 +++++++++++++++++++++++++-------
2 files changed, 26 insertions(+), 7 deletions(-)
diff --git a/.mailmap b/.mailmap
index 9541b3b02e..daa1f52205 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1131,6 +1131,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz@intel.com>
Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Przemyslaw Zegan <przemyslawx.zegan@intel.com>
Pu Xu <583493798@qq.com>
+Qian Hao <qi_an_hao@126.com>
Qian Xu <qian.q.xu@intel.com>
Qiao Liu <qiao.liu@intel.com>
Qi Fu <qi.fu@intel.com>
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fd6f77e4..f839db9102 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -5,6 +5,7 @@
#include <stdlib.h>
#include <signal.h>
#include <getopt.h>
+#include <stdbool.h>
#include <rte_eal.h>
#include <rte_common.h>
@@ -427,8 +428,8 @@ int_handler(int sig_num)
* The mbufs are then passed to the worker threads via the rx_to_workers
* ring.
*/
-static int
-rx_thread(struct rte_ring *ring_out)
+static __rte_always_inline int
+rx_thread(struct rte_ring *ring_out, bool disable_reorder_flag)
{
uint32_t seqn = 0;
uint16_t i, ret = 0;
@@ -454,9 +455,11 @@ rx_thread(struct rte_ring *ring_out)
}
app_stats.rx.rx_pkts += nb_rx_pkts;
- /* mark sequence number */
- for (i = 0; i < nb_rx_pkts; )
- *rte_reorder_seqn(pkts[i++]) = seqn++;
+ /* mark sequence number if reorder is enabled */
+ if (!disable_reorder_flag) {
+ for (i = 0; i < nb_rx_pkts;)
+ *rte_reorder_seqn(pkts[i++]) = seqn++;
+ }
/* enqueue to rx_to_workers ring */
ret = rte_ring_enqueue_burst(ring_out,
@@ -473,6 +476,18 @@ rx_thread(struct rte_ring *ring_out)
return 0;
}
+static __rte_noinline int
+rx_thread_reorder(struct rte_ring *ring_out)
+{
+ return rx_thread(ring_out, false);
+}
+
+static __rte_noinline int
+rx_thread_reorder_disabled(struct rte_ring *ring_out)
+{
+ return rx_thread(ring_out, true);
+}
+
/**
* This thread takes bursts of packets from the rx_to_workers ring and
* Changes the input port value to output port value. And feds it to
@@ -772,8 +787,11 @@ main(int argc, char **argv)
(void *)&send_args, last_lcore_id);
}
- /* Start rx_thread() on the main core */
- rx_thread(rx_to_workers);
+ /* Start rx_thread_xxx() on the main core */
+ if (disable_reorder)
+ rx_thread_reorder_disabled(rx_to_workers);
+ else
+ rx_thread_reorder(rx_to_workers);
RTE_LCORE_FOREACH_WORKER(lcore_id) {
if (rte_eal_wait_lcore(lcore_id) < 0)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:07.152866717 +0800
+++ 0069-examples-packet_ordering-fix-Rx-with-reorder-mode-di.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 7ba49dc729937ea97642a615e9b08f33919b94f4 Mon Sep 17 00:00:00 2001
+From 10296d5f506e4ad4e5c7bb2c32a1f6369ffdcd9c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7ba49dc729937ea97642a615e9b08f33919b94f4 ]
@@ -25,2 +27,0 @@
-Cc: stable@dpdk.org
-
@@ -35 +36 @@
-index 1b346f630f..55913d0450 100644
+index 9541b3b02e..daa1f52205 100644
@@ -38 +39 @@
-@@ -1142,6 +1142,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz@intel.com>
+@@ -1131,6 +1131,7 @@ Przemyslaw Czesnowicz <przemyslaw.czesnowicz@intel.com>
^ permalink raw reply [flat|nested] 263+ messages in thread
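The specialization trick the patch uses — one always-inline body taking a constant flag, plus two noinline wrappers — lets the compiler constant-fold the reorder branch so no per-burst test remains in either copy. A minimal sketch (the sequence-number write stands in for `*rte_reorder_seqn()`; GCC/Clang attribute syntax assumed):

```c
#include <assert.h>

/* Always-inline body: 'mark_seqn' is a compile-time constant in each
 * wrapper, so the branch below disappears after inlining. */
static inline __attribute__((always_inline)) int
rx_body(int *seqn, int nb_pkts, int mark_seqn)
{
	int marked = 0;

	for (int i = 0; i < nb_pkts; i++) {
		if (mark_seqn) {     /* constant-folded per wrapper */
			seqn[i] = i; /* stands in for *rte_reorder_seqn() */
			marked++;
		}
	}
	return marked;
}

/* Two specialized, non-inlined entry points, as in the patch. */
static __attribute__((noinline)) int
rx_reorder(int *seqn, int n)
{
	return rx_body(seqn, n, 1);
}

static __attribute__((noinline)) int
rx_reorder_disabled(int *seqn, int n)
{
	return rx_body(seqn, n, 0);
}
```

The disabled variant never touches the sequence field at all, which is exactly what prevents the out-of-bounds writes the bug report describes.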
* patch 'examples/l3fwd: fix Rx over not ready port' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (67 preceding siblings ...)
2024-04-13 12:49 ` patch 'examples/packet_ordering: fix Rx with reorder mode disabled' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'dts: fix smoke tests driver regex' " Xueming Li
` (54 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Konstantin Ananyev, Pavan Nikhilesh, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ce95b8c9cd764820f0364755f97809549f58e76f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ce95b8c9cd764820f0364755f97809549f58e76f Mon Sep 17 00:00:00 2001
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Date: Fri, 1 Mar 2024 16:39:31 +0000
Subject: [PATCH] examples/l3fwd: fix Rx over not ready port
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 495709d32be87cb962c55917b19ad7d4489cde3e ]
When running l3fwd in event mode with the SW eventdev, service cores
can start RX before the main thread has finished PMD installation.
To reproduce:
./dpdk-l3fwd --lcores=49,51 -n 6 -a ca:00.0 -s 0x8000000000000 \
--vdev event_sw0 -- \
-L -P -p 1 --mode eventdev --eventq-sched=ordered \
--rule_ipv4=test/l3fwd_lpm_v4_u1.cfg --rule_ipv6=test/l3fwd_lpm_v6_u1.cfg \
--no-numa
At the init stage, the user will most likely see an error message like:
ETHDEV: lcore 51 called rx_pkt_burst for not ready port 0
0: ./dpdk-l3fwd (rte_dump_stack+0x1f) [15de723]
...
9: ./dpdk-l3fwd (eal_thread_loop+0x5a2) [15c1324]
...
Then it all depends on how lucky/unlucky you are:
if there are actual packets in the HW RX queue, the app will most
likely crash; otherwise it might survive.
As the error message suggests, the problem is that services are started
before the main thread has finished NIC setup and initialization.
The suggested fix moves service startup after the NIC setup phase.
Bugzilla ID: 1390
Fixes: 8bd537e9c6cf ("examples/l3fwd: add service core setup based on caps")
Signed-off-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l3fwd/main.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 3bf28aec0c..d4fb5d1971 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1577,7 +1577,6 @@ main(int argc, char **argv)
l3fwd_lkp.main_loop = evt_rsrc->ops.fib_event_loop;
else
l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
- l3fwd_event_service_setup();
} else
#endif
l3fwd_poll_resource_setup();
@@ -1609,6 +1608,11 @@ main(int argc, char **argv)
}
}
+#ifdef RTE_LIB_EVENTDEV
+ if (evt_rsrc->enabled)
+ l3fwd_event_service_setup();
+#endif
+
printf("\n");
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.179359982 +0800
+++ 0070-examples-l3fwd-fix-Rx-over-not-ready-port.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 495709d32be87cb962c55917b19ad7d4489cde3e Mon Sep 17 00:00:00 2001
+From ce95b8c9cd764820f0364755f97809549f58e76f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 495709d32be87cb962c55917b19ad7d4489cde3e ]
@@ -31 +33,0 @@
-Cc: stable@dpdk.org
* patch 'dts: fix smoke tests driver regex' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: Jeremy Spewock, Nicholas Pratte, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2f8836901cbbe9221ab2495ef5ee3caf4e76d2b7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2f8836901cbbe9221ab2495ef5ee3caf4e76d2b7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Juraj=20Linke=C5=A1?= <juraj.linkes@pantheon.tech>
Date: Fri, 23 Feb 2024 09:30:01 +0100
Subject: [PATCH] dts: fix smoke tests driver regex
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 409359adce43360a77c2675ff0e228da67ae3c96 ]
Add a hyphen to the regex character class, which is needed for drivers such as vfio-pci.
Fixes: 88489c0501af ("dts: add smoke tests")
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
Tested-by: Nicholas Pratte <npratte@iol.unh.edu>
---
.mailmap | 1 +
dts/tests/TestSuite_smoke_tests.py | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index daa1f52205..f76fef1c48 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1013,6 +1013,7 @@ Nemanja Marjanovic <nemanja.marjanovic@intel.com>
Netanel Belgazal <netanel@amazon.com>
Netanel Gonen <netanelg@mellanox.com>
Niall Power <niall.power@intel.com>
+Nicholas Pratte <npratte@iol.unh.edu>
Nick Connolly <nick.connolly@mayadata.io>
Nick Nunley <nicholas.d.nunley@intel.com>
Niclas Storm <niclas.storm@ericsson.com>
diff --git a/dts/tests/TestSuite_smoke_tests.py b/dts/tests/TestSuite_smoke_tests.py
index 8958f58dac..5e897cf5d2 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -91,7 +91,7 @@ class SmokeTests(TestSuite):
# with the address for the nic we are on in the loop and then captures the
# name of the driver in a group
devbind_info_for_nic = re.search(
- f"{nic.pci}[^\\n]*drv=([\\d\\w]*) [^\\n]*",
+ f"{nic.pci}[^\\n]*drv=([\\d\\w-]*) [^\\n]*",
all_nics_in_dpdk_devbind,
)
self.verify(
--
2.34.1
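The one-character regex change above can be checked in isolation. A minimal sketch follows; the dpdk-devbind output line and PCI address below are fabricated samples for illustration, not taken from the patch:

```python
import re

# Hypothetical dpdk-devbind status line for a NIC bound to vfio-pci.
line = "0000:ca:00.0 'Ethernet Controller E810-C' drv=vfio-pci unused=ice"
pci = "0000:ca:00.0"

# Old pattern: the character class lacks '-', so the capture stops at
# "vfio", the required space never follows, and the whole search fails.
old = re.search(f"{pci}[^\\n]*drv=([\\d\\w]*) [^\\n]*", line)

# Fixed pattern: with '-' in the class the full driver name is captured.
new = re.search(f"{pci}[^\\n]*drv=([\\d\\w-]*) [^\\n]*", line)

print(old)           # None
print(new.group(1))  # vfio-pci
```

This shows why the smoke test failed only for hyphenated driver names such as vfio-pci while passing for names like igb_uio.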
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.204000550 +0800
+++ 0071-dts-fix-smoke-tests-driver-regex.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 409359adce43360a77c2675ff0e228da67ae3c96 Mon Sep 17 00:00:00 2001
+From 2f8836901cbbe9221ab2495ef5ee3caf4e76d2b7 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 409359adce43360a77c2675ff0e228da67ae3c96 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 55913d0450..68b4cae8d3 100644
+index daa1f52205..f76fef1c48 100644
@@ -26 +28 @@
-@@ -1020,6 +1020,7 @@ Nemanja Marjanovic <nemanja.marjanovic@intel.com>
+@@ -1013,6 +1013,7 @@ Nemanja Marjanovic <nemanja.marjanovic@intel.com>
@@ -35 +37 @@
-index 5e2bac14bd..1be5c3047e 100644
+index 8958f58dac..5e897cf5d2 100644
@@ -38 +40 @@
-@@ -130,7 +130,7 @@ class SmokeTests(TestSuite):
+@@ -91,7 +91,7 @@ class SmokeTests(TestSuite):
* patch 'examples/l3fwd: fix Rx queue configuration' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Kamil Vojanec; +Cc: Konstantin Ananyev, Kevin Traynor, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=587143897e3f081134d895adb59526e748d8e7f6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 587143897e3f081134d895adb59526e748d8e7f6 Mon Sep 17 00:00:00 2001
From: Kamil Vojanec <vojanec@cesnet.cz>
Date: Fri, 16 Feb 2024 13:02:07 +0100
Subject: [PATCH] examples/l3fwd: fix Rx queue configuration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7015f232adf0e1622822660afc70055bf359bd7a ]
When configuring Rx queues, the default port configuration was used,
even though it had been modified before. This resulted in
'relax-rx-offload' not being respected for Rx queues.
This commit uses 'rte_eth_dev_conf_get()' to obtain the device
configuration structure instead.
Fixes: 4b01cabfb09b ("examples/l3fwd: add option to relax Rx offload")
Signed-off-by: Kamil Vojanec <vojanec@cesnet.cz>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
---
.mailmap | 2 +-
examples/l3fwd/main.c | 9 ++++++++-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/.mailmap b/.mailmap
index f76fef1c48..debb7beb5f 100644
--- a/.mailmap
+++ b/.mailmap
@@ -723,7 +723,7 @@ Kamalakshitha Aligeri <kamalakshitha.aligeri@arm.com>
Kamil Bednarczyk <kamil.bednarczyk@intel.com>
Kamil Chalupnik <kamilx.chalupnik@intel.com>
Kamil Rytarowski <kamil.rytarowski@caviumnetworks.com>
-Kamil Vojanec <xvojan00@stud.fit.vutbr.cz>
+Kamil Vojanec <vojanec@cesnet.cz> <xvojan00@stud.fit.vutbr.cz>
Kanaka Durga Kotamarthy <kkotamarthy@marvell.com>
Karen Kelly <karen.kelly@intel.com>
Karen Sornek <karen.sornek@intel.com>
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index d4fb5d1971..8d32ae1dd5 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1388,6 +1388,7 @@ l3fwd_poll_resource_setup(void)
fflush(stdout);
/* init RX queues */
for(queue = 0; queue < qconf->n_rx_queue; ++queue) {
+ struct rte_eth_conf local_conf;
struct rte_eth_rxconf rxq_conf;
portid = qconf->rx_queue_list[queue].port_id;
@@ -1408,8 +1409,14 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = rte_eth_dev_conf_get(portid, &local_conf);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Error during getting device (port %u) configuration: %s\n",
+ portid, strerror(-ret));
+
rxq_conf = dev_info.default_rxconf;
- rxq_conf.offloads = port_conf.rxmode.offloads;
+ rxq_conf.offloads = local_conf.rxmode.offloads;
if (!per_port_pool)
ret = rte_eth_rx_queue_setup(portid, queueid,
nb_rxd, socketid,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:07.229162617 +0800
+++ 0072-examples-l3fwd-fix-Rx-queue-configuration.patch 2024-04-13 20:43:05.017753905 +0800
@@ -1 +1 @@
-From 7015f232adf0e1622822660afc70055bf359bd7a Mon Sep 17 00:00:00 2001
+From 587143897e3f081134d895adb59526e748d8e7f6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7015f232adf0e1622822660afc70055bf359bd7a ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 68b4cae8d3..66ebc20666 100644
+index f76fef1c48..debb7beb5f 100644
@@ -27 +29 @@
-@@ -728,7 +728,7 @@ Kamalakshitha Aligeri <kamalakshitha.aligeri@arm.com>
+@@ -723,7 +723,7 @@ Kamalakshitha Aligeri <kamalakshitha.aligeri@arm.com>
* patch 'net/virtio: fix vDPA device init advertising control queue' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: David Marchand, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7105c8a2997b364313c06f7d4cbc62a64eb61dec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7105c8a2997b364313c06f7d4cbc62a64eb61dec Mon Sep 17 00:00:00 2001
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Wed, 13 Mar 2024 13:59:31 +0100
Subject: [PATCH] net/virtio: fix vDPA device init advertising control queue
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 58c894154165f24de8344383e2d1d5cccf4a42bc ]
If the vDPA device advertises control queue support, but
the user neither passes "cq=1" as devarg nor requests
multiple queues, the initialization fails because the
driver tries to set up the control queue without negotiating
the related feature.
This patch enables the control queue at driver level as
soon as the device claims to support it, and not only when
multiple queue pairs are requested. Also, enable the
control queue even if the multiqueue feature has not been
negotiated at device start time, and disable it at device
stop time.
Fixes: b277308e8868 ("net/virtio-user: advertise control VQ support with vDPA")
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
.../net/virtio/virtio_user/virtio_user_dev.c | 22 ++++++++++++++-----
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index af1f8c8237..4fd89a8e97 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -215,6 +215,12 @@ virtio_user_start_device(struct virtio_user_dev *dev)
if (ret < 0)
goto error;
+ if (dev->scvq) {
+ ret = dev->ops->cvq_enable(dev, 1);
+ if (ret < 0)
+ goto error;
+ }
+
dev->started = true;
pthread_mutex_unlock(&dev->mutex);
@@ -247,6 +253,12 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
goto err;
}
+ if (dev->scvq) {
+ ret = dev->ops->cvq_enable(dev, 0);
+ if (ret < 0)
+ goto err;
+ }
+
/* Stop the backend. */
for (i = 0; i < dev->max_queue_pairs * 2; ++i) {
state.index = i;
@@ -725,7 +737,7 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
if (virtio_user_dev_init_max_queue_pairs(dev, queues))
dev->unsupported_features |= (1ull << VIRTIO_NET_F_MQ);
- if (dev->max_queue_pairs > 1)
+ if (dev->max_queue_pairs > 1 || dev->hw_cvq)
cq = 1;
if (!mrg_rxbuf)
@@ -743,8 +755,9 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
dev->unsupported_features |= (1ull << VIRTIO_NET_F_MAC);
if (cq) {
- /* device does not really need to know anything about CQ,
- * so if necessary, we just claim to support CQ
+ /* Except for vDPA, the device does not really need to know
+ * anything about CQ, so if necessary, we just claim to support
+ * control queue.
*/
dev->frontend_features |= (1ull << VIRTIO_NET_F_CTRL_VQ);
} else {
@@ -844,9 +857,6 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
for (i = q_pairs; i < dev->max_queue_pairs; ++i)
ret |= dev->ops->enable_qp(dev, i, 0);
- if (dev->scvq)
- ret |= dev->ops->cvq_enable(dev, 1);
-
dev->queue_pairs = q_pairs;
return ret;
--
2.34.1
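The intent of the restructuring above — enable the control virtqueue when the device starts and disable it when the device stops, independently of multiqueue handling — can be sketched with a toy Python model. Class and method names are simplified stand-ins for the driver's structures, not the actual DPDK API:

```python
class VirtioUserDev:
    """Toy model of the start/stop symmetry introduced by the patch."""

    def __init__(self, has_cvq):
        self.scvq = has_cvq       # stand-in for dev->scvq
        self.cvq_enabled = False
        self.started = False

    def cvq_enable(self, enable):
        # Stand-in for dev->ops->cvq_enable(); returns 0 on success.
        self.cvq_enabled = bool(enable)
        return 0

    def start(self):
        # ... queue setup would happen here ...
        if self.scvq:
            if self.cvq_enable(1) < 0:
                return -1
        self.started = True
        return 0

    def stop(self):
        if self.scvq:
            if self.cvq_enable(0) < 0:
                return -1
        self.started = False
        return 0
```

The key property is that the control queue state now tracks device start/stop rather than the multiqueue message handler, so a device with a control queue but a single queue pair still works.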
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.252100287 +0800
+++ 0073-net-virtio-fix-vDPA-device-init-advertising-control-.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From 58c894154165f24de8344383e2d1d5cccf4a42bc Mon Sep 17 00:00:00 2001
+From 7105c8a2997b364313c06f7d4cbc62a64eb61dec Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 58c894154165f24de8344383e2d1d5cccf4a42bc ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index d395fc1676..54187fedf5 100644
+index af1f8c8237..4fd89a8e97 100644
@@ -32 +34 @@
-@@ -216,6 +216,12 @@ virtio_user_start_device(struct virtio_user_dev *dev)
+@@ -215,6 +215,12 @@ virtio_user_start_device(struct virtio_user_dev *dev)
@@ -45 +47 @@
-@@ -248,6 +254,12 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
+@@ -247,6 +253,12 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
@@ -58 +60 @@
-@@ -752,7 +764,7 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+@@ -725,7 +737,7 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
@@ -67 +69 @@
-@@ -770,8 +782,9 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+@@ -743,8 +755,9 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
@@ -79 +81 @@
-@@ -871,9 +884,6 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
+@@ -844,9 +857,6 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
* patch 'build: pass cflags in subproject' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Robin Jarry; +Cc: David Marchand, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9796ac2ab87502888fc1a1d7252233d132b6ae41
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9796ac2ab87502888fc1a1d7252233d132b6ae41 Mon Sep 17 00:00:00 2001
From: Robin Jarry <rjarry@redhat.com>
Date: Fri, 8 Mar 2024 12:58:40 +0100
Subject: [PATCH] build: pass cflags in subproject
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3f9aa55632bd272981471516608a7eaf543bea37 ]
When DPDK is used as a subproject, include the required compile
arguments so that the parent project is also built with the appropriate
cflags (most importantly -march). Use the same cflags as pkg-config.
Fixes: f93a605f2d6e ("build: add definitions for use as Meson subproject")
Signed-off-by: Robin Jarry <rjarry@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
buildtools/subproject/meson.build | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/buildtools/subproject/meson.build b/buildtools/subproject/meson.build
index 322e01c029..203c5d36c6 100644
--- a/buildtools/subproject/meson.build
+++ b/buildtools/subproject/meson.build
@@ -2,10 +2,15 @@
# Copyright(c) 2022 Intel Corporation
message('DPDK subproject linking: ' + get_option('default_library'))
+subproject_cflags = ['-include', 'rte_config.h'] + machine_args
+if is_freebsd
+ subproject_cflags += ['-D__BSD_VISIBLE']
+endif
if get_option('default_library') == 'static'
dpdk_dep = declare_dependency(
version: meson.project_version(),
dependencies: dpdk_static_lib_deps,
+ compile_args: subproject_cflags,
# static library deps in DPDK build don't include "link_with" parameters,
# so explicitly link-in both libs and drivers
link_whole: dpdk_static_libraries + dpdk_drivers,
@@ -13,6 +18,7 @@ if get_option('default_library') == 'static'
else
dpdk_dep = declare_dependency(
version: meson.project_version(),
+ compile_args: subproject_cflags,
# shared library deps include all necessary linking parameters
dependencies: dpdk_shared_lib_deps)
endif
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.277467454 +0800
+++ 0074-build-pass-cflags-in-subproject.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From 3f9aa55632bd272981471516608a7eaf543bea37 Mon Sep 17 00:00:00 2001
+From 9796ac2ab87502888fc1a1d7252233d132b6ae41 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3f9aa55632bd272981471516608a7eaf543bea37 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
* patch 'examples/ipsec-secgw: fix cryptodev to SA mapping' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Radu Nicolau; +Cc: Ting-Kai Ku, Ciara Power, Kai Ji, Anoob Joseph, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=90d0e13d7d2651f8a32c66378d23605cd664c446
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 90d0e13d7d2651f8a32c66378d23605cd664c446 Mon Sep 17 00:00:00 2001
From: Radu Nicolau <radu.nicolau@intel.com>
Date: Tue, 27 Feb 2024 13:28:46 +0000
Subject: [PATCH] examples/ipsec-secgw: fix cryptodev to SA mapping
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f406064ff0988a53f955c74a672d696c595dc0f0 ]
There are use cases where an SA should be able to use different
cryptodevs on different lcores; for example, there can be cryptodevs
with just 1 qp per VF.
Hence, the check in the create lookaside session function is relaxed.
Also add a check to verify that a CQP is available for the current lcore.
Fixes: a8ade12123c3 ("examples/ipsec-secgw: create lookaside sessions at init")
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Tested-by: Ting-Kai Ku <ting-kai.ku@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
.mailmap | 1 +
examples/ipsec-secgw/ipsec.c | 20 ++++++++++++++++----
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/.mailmap b/.mailmap
index debb7beb5f..3b32923fef 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1431,6 +1431,7 @@ Timothy McDaniel <timothy.mcdaniel@intel.com>
Timothy Miskell <timothy.miskell@intel.com>
Timothy Redaelli <tredaelli@redhat.com>
Tim Shearer <tim.shearer@overturenetworks.com>
+Ting-Kai Ku <ting-kai.ku@intel.com>
Ting Xu <ting.xu@intel.com>
Tiwei Bie <tiwei.bie@intel.com> <btw@mail.ustc.edu.cn>
Todd Fujinaka <todd.fujinaka@intel.com>
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index f5cec4a928..c321108119 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -288,10 +288,21 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx_lcore[],
if (cdev_id == RTE_CRYPTO_MAX_DEVS)
cdev_id = ipsec_ctx->tbl[cdev_id_qp].id;
else if (cdev_id != ipsec_ctx->tbl[cdev_id_qp].id) {
- RTE_LOG(ERR, IPSEC,
- "SA mapping to multiple cryptodevs is "
- "not supported!");
- return -EINVAL;
+ struct rte_cryptodev_info dev_info_1, dev_info_2;
+ rte_cryptodev_info_get(cdev_id, &dev_info_1);
+ rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id,
+ &dev_info_2);
+ if (dev_info_1.driver_id == dev_info_2.driver_id) {
+ RTE_LOG(WARNING, IPSEC,
+ "SA mapped to multiple cryptodevs for SPI %d\n",
+ sa->spi);
+
+ } else {
+ RTE_LOG(WARNING, IPSEC,
+ "SA mapped to multiple cryptodevs of different types for SPI %d\n",
+ sa->spi);
+
+ }
}
/* Store per core queue pair information */
@@ -908,6 +919,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
continue;
}
+ RTE_ASSERT(sa->cqp[ipsec_ctx->lcore_id] != NULL);
enqueue_cop(sa->cqp[ipsec_ctx->lcore_id], &priv->cop);
}
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.299143126 +0800
+++ 0075-examples-ipsec-secgw-fix-cryptodev-to-SA-mapping.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From f406064ff0988a53f955c74a672d696c595dc0f0 Mon Sep 17 00:00:00 2001
+From 90d0e13d7d2651f8a32c66378d23605cd664c446 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f406064ff0988a53f955c74a672d696c595dc0f0 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 66ebc20666..50726e1232 100644
+index debb7beb5f..3b32923fef 100644
@@ -29 +31 @@
-@@ -1442,6 +1442,7 @@ Timothy McDaniel <timothy.mcdaniel@intel.com>
+@@ -1431,6 +1431,7 @@ Timothy McDaniel <timothy.mcdaniel@intel.com>
* patch 'crypto/qat: fix crash with CCM null AAD pointer' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Arkadiusz Kusztal; +Cc: Ciara Power, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=72d3dfa9decb8de97c99c5c5ac2cd26413290d7e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 72d3dfa9decb8de97c99c5c5ac2cd26413290d7e Mon Sep 17 00:00:00 2001
From: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Date: Fri, 8 Mar 2024 08:25:12 +0000
Subject: [PATCH] crypto/qat: fix crash with CCM null AAD pointer
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e90ef1803bb34768d1446e756046020e8cbce4bc ]
This commit fixes a segfault that occurs when a NULL pointer
is set in the AAD pointer field.
Fixes: a815a04cea05 ("crypto/qat: support symmetric build op request")
Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index b8ddf42d6f..24044bec13 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -894,10 +894,12 @@ enqueue_one_aead_job_gen1(struct qat_sym_session *ctx,
*(uint8_t *)&cipher_param->u.cipher_IV_array[0] =
q - ICP_QAT_HW_CCM_NONCE_OFFSET;
- rte_memcpy((uint8_t *)aad->va +
- ICP_QAT_HW_CCM_NONCE_OFFSET,
- (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET,
- ctx->cipher_iv.length);
+ if (ctx->aad_len > 0) {
+ rte_memcpy((uint8_t *)aad->va +
+ ICP_QAT_HW_CCM_NONCE_OFFSET,
+ (uint8_t *)iv->va + ICP_QAT_HW_CCM_NONCE_OFFSET,
+ ctx->cipher_iv.length);
+ }
break;
default:
break;
--
2.34.1
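A minimal Python analogue of the guard added above, assuming a simplified buffer model. The offset constant and helper name are illustrative stand-ins, not the QAT driver's API:

```python
# Stand-in for ICP_QAT_HW_CCM_NONCE_OFFSET (the real value is assumed).
NONCE_OFFSET = 1

def fill_ccm_aad(aad_buf, iv, iv_len, aad_len):
    # Mirrors the fix: skip the copy entirely when no AAD was supplied,
    # so a NULL AAD buffer (modelled here as None) is never touched.
    if aad_len > 0:
        end = NONCE_OFFSET + iv_len
        aad_buf[NONCE_OFFSET:end] = iv[NONCE_OFFSET:end]

# With aad_len == 0 a None buffer is left alone; before the fix the
# equivalent rte_memcpy() dereferenced the NULL AAD pointer.
fill_ccm_aad(None, b"\x00" * 16, 12, 0)   # no-op, no crash
buf = bytearray(16)
fill_ccm_aad(buf, bytes(range(16)), 12, 16)
```

The point of the fix is purely the length check: the nonce copy is only meaningful when the caller actually provided an AAD buffer.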
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.327076989 +0800
+++ 0076-crypto-qat-fix-crash-with-CCM-null-AAD-pointer.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From e90ef1803bb34768d1446e756046020e8cbce4bc Mon Sep 17 00:00:00 2001
+From 72d3dfa9decb8de97c99c5c5ac2cd26413290d7e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e90ef1803bb34768d1446e756046020e8cbce4bc ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 60b0f0551c..1f5d2583c4 100644
+index b8ddf42d6f..24044bec13 100644
@@ -22 +24 @@
-@@ -925,10 +925,12 @@ enqueue_one_aead_job_gen1(struct qat_sym_session *ctx,
+@@ -894,10 +894,12 @@ enqueue_one_aead_job_gen1(struct qat_sym_session *ctx,
* patch 'net/hns3: enable PFC for all user priorities' has been queued to stable release 23.11.1
@ 2024-04-13 12:49 ` Xueming Li
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Jie Hai; +Cc: dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cadb90f7119a2b3e45df919f941feab9029fe92f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cadb90f7119a2b3e45df919f941feab9029fe92f Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Wed, 6 Mar 2024 17:20:47 +0800
Subject: [PATCH] net/hns3: enable PFC for all user priorities
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit aae6989df36c105b917cf69975c075dfde2e6b84 ]
When the user sets the TC number to 4 and enables PFC and ETS via
dev_configure, the driver only enables user priorities 0-3.
Packets with user priorities 4-7 cannot trigger PFC frames.
Fix by enabling PFC for all user priorities.
Note that nb_tcs from the user can never be 0 because of the earlier
check in the driver, so remove that redundant code.
Fixes: 62e3ccc2b94c ("net/hns3: support flow control")
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_dcb.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index 2831d3dc62..915e4eb768 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1499,7 +1499,6 @@ hns3_dcb_info_update(struct hns3_adapter *hns, uint8_t num_tc)
static int
hns3_dcb_hw_configure(struct hns3_adapter *hns)
{
- struct rte_eth_dcb_rx_conf *dcb_rx_conf;
struct hns3_pf *pf = &hns->pf;
struct hns3_hw *hw = &hns->hw;
enum hns3_fc_status fc_status = hw->current_fc_status;
@@ -1519,12 +1518,8 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
}
if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
- dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
- if (dcb_rx_conf->nb_tcs == 0)
- hw->dcb_info.pfc_en = 1; /* tc0 only */
- else
- hw->dcb_info.pfc_en =
- RTE_LEN2MASK((uint8_t)dcb_rx_conf->nb_tcs, uint8_t);
+ hw->dcb_info.pfc_en =
+ RTE_LEN2MASK((uint8_t)HNS3_MAX_USER_PRIO, uint8_t);
hw->dcb_info.hw_pfc_map =
hns3_dcb_undrop_tc_map(hw, hw->dcb_info.pfc_en);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.358308448 +0800
+++ 0077-net-hns3-enable-PFC-for-all-user-priorities.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From aae6989df36c105b917cf69975c075dfde2e6b84 Mon Sep 17 00:00:00 2001
+From cadb90f7119a2b3e45df919f941feab9029fe92f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit aae6989df36c105b917cf69975c075dfde2e6b84 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'doc: add traffic manager in features table' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (75 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/hns3: enable PFC for all user priorities' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'app/testpmd: fix async indirect action list creation' " Xueming Li
` (46 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Huisong Li
Cc: Chengwen Feng, Ferruh Yigit, Hemant Agrawal, Cristian Dumitrescu,
dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=166c5df81008cb833c577cc868dfad6bc79b155c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 166c5df81008cb833c577cc868dfad6bc79b155c Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 28 Nov 2023 20:40:56 +0800
Subject: [PATCH] doc: add traffic manager in features table
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a905e821007e884355e01289bdc67c3548591fec ]
Add Traffic Manager feature.
Fixes: 5d109deffa87 ("ethdev: add traffic management API")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
doc/guides/nics/features.rst | 13 +++++++++++++
doc/guides/nics/features/cnxk.ini | 1 +
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/dpaa2.ini | 1 +
doc/guides/nics/features/hns3.ini | 1 +
doc/guides/nics/features/i40e.ini | 1 +
doc/guides/nics/features/iavf.ini | 3 ++-
doc/guides/nics/features/ice.ini | 1 +
doc/guides/nics/features/ice_dcf.ini | 1 +
doc/guides/nics/features/ipn3ke.ini | 1 +
doc/guides/nics/features/ixgbe.ini | 1 +
doc/guides/nics/features/mvpp2.ini | 3 ++-
doc/guides/nics/features/txgbe.ini | 1 +
13 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f7d9980849..966b3e17d1 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -751,6 +751,19 @@ Supports congestion management.
``rte_eth_cman_config_set()``, ``rte_eth_cman_config_get()``.
+.. _nic_features_traffic_manager:
+
+Traffic manager
+---------------
+
+Supports Traffic manager.
+
+* **[implements] rte_tm_ops**: ``capabilities_get``, ``shaper_profile_add``,
+ ``hierarchy_commit`` and so on.
+* **[related] API**: ``rte_tm_capabilities_get()``, ``rte_tm_shaper_profile_add()``,
+ ``rte_tm_hierarchy_commit()`` and so on.
+
+
.. _nic_features_fw_version:
FW version
diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index ac7de9a0f0..f85813ab52 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -28,6 +28,7 @@ RSS key update = Y
RSS reta update = Y
Inner RSS = Y
Congestion management = Y
+Traffic manager = Y
Inline protocol = Y
Flow control = Y
Scattered Rx = Y
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 806cb033ff..64ee0f8c2f 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -42,6 +42,7 @@ VLAN filter =
Flow control =
Rate limitation =
Congestion management =
+Traffic manager =
Inline crypto =
Inline protocol =
CRC offload =
diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index 26dc8c2178..f02da463d9 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -17,6 +17,7 @@ Unicast MAC filter = Y
RSS hash = Y
VLAN filter = Y
Flow control = Y
+Traffic manager = Y
VLAN offload = Y
L3 checksum offload = Y
L4 checksum offload = Y
diff --git a/doc/guides/nics/features/hns3.ini b/doc/guides/nics/features/hns3.ini
index 338b4e6864..a20ece20e8 100644
--- a/doc/guides/nics/features/hns3.ini
+++ b/doc/guides/nics/features/hns3.ini
@@ -28,6 +28,7 @@ RSS reta update = Y
DCB = Y
VLAN filter = Y
Flow control = Y
+Traffic manager = Y
CRC offload = Y
VLAN offload = Y
FEC = Y
diff --git a/doc/guides/nics/features/i40e.ini b/doc/guides/nics/features/i40e.ini
index e241dad047..2d168199f0 100644
--- a/doc/guides/nics/features/i40e.ini
+++ b/doc/guides/nics/features/i40e.ini
@@ -27,6 +27,7 @@ SR-IOV = Y
DCB = Y
VLAN filter = Y
Flow control = Y
+Traffic manager = Y
CRC offload = Y
VLAN offload = Y
QinQ offload = P
diff --git a/doc/guides/nics/features/iavf.ini b/doc/guides/nics/features/iavf.ini
index db4f92ce71..c59115ae15 100644
--- a/doc/guides/nics/features/iavf.ini
+++ b/doc/guides/nics/features/iavf.ini
@@ -25,6 +25,8 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
VLAN filter = Y
+Traffic manager = Y
+Inline crypto = Y
CRC offload = Y
VLAN offload = P
L3 checksum offload = Y
@@ -35,7 +37,6 @@ Inner L4 checksum = Y
Packet type parsing = Y
Rx descriptor status = Y
Tx descriptor status = Y
-Inline crypto = Y
Basic stats = Y
Multiprocess aware = Y
FreeBSD = Y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 13f8871dcc..8febbc4f1e 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -26,6 +26,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
VLAN filter = Y
+Traffic manager = Y
CRC offload = Y
VLAN offload = Y
QinQ offload = P
diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
index 3b11622d4c..0e86338990 100644
--- a/doc/guides/nics/features/ice_dcf.ini
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -22,6 +22,7 @@ Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
VLAN filter = Y
+Traffic manager = Y
VLAN offload = Y
Extended stats = Y
Basic stats = Y
diff --git a/doc/guides/nics/features/ipn3ke.ini b/doc/guides/nics/features/ipn3ke.ini
index 1f6b780273..e412978820 100644
--- a/doc/guides/nics/features/ipn3ke.ini
+++ b/doc/guides/nics/features/ipn3ke.ini
@@ -25,6 +25,7 @@ SR-IOV = Y
DCB = Y
VLAN filter = Y
Flow control = Y
+Traffic manager = Y
CRC offload = Y
VLAN offload = Y
QinQ offload = Y
diff --git a/doc/guides/nics/features/ixgbe.ini b/doc/guides/nics/features/ixgbe.ini
index 8590ac857f..f05fcec455 100644
--- a/doc/guides/nics/features/ixgbe.ini
+++ b/doc/guides/nics/features/ixgbe.ini
@@ -27,6 +27,7 @@ DCB = Y
VLAN filter = Y
Flow control = Y
Rate limitation = Y
+Traffic manager = Y
Inline crypto = Y
CRC offload = P
VLAN offload = P
diff --git a/doc/guides/nics/features/mvpp2.ini b/doc/guides/nics/features/mvpp2.ini
index 653c9d08cb..ccc2c2d4f8 100644
--- a/doc/guides/nics/features/mvpp2.ini
+++ b/doc/guides/nics/features/mvpp2.ini
@@ -12,8 +12,9 @@ Allmulticast mode = Y
Unicast MAC filter = Y
Multicast MAC filter = Y
RSS hash = Y
-Flow control = Y
VLAN filter = Y
+Flow control = Y
+Traffic manager = Y
CRC offload = Y
L3 checksum offload = Y
L4 checksum offload = Y
diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini
index e21083052c..3a11fb2037 100644
--- a/doc/guides/nics/features/txgbe.ini
+++ b/doc/guides/nics/features/txgbe.ini
@@ -26,6 +26,7 @@ DCB = Y
VLAN filter = Y
Flow control = Y
Rate limitation = Y
+Traffic manager = Y
Inline crypto = Y
CRC offload = P
VLAN offload = P
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.384903914 +0800
+++ 0078-doc-add-traffic-manager-in-features-table.patch 2024-04-13 20:43:05.027753892 +0800
@@ -1 +1 @@
-From a905e821007e884355e01289bdc67c3548591fec Mon Sep 17 00:00:00 2001
+From 166c5df81008cb833c577cc868dfad6bc79b155c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a905e821007e884355e01289bdc67c3548591fec ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -33 +35 @@
-index 38eba8742c..b4f7f1ee61 100644
+index f7d9980849..966b3e17d1 100644
@@ -36 +38 @@
-@@ -762,6 +762,19 @@ Supports congestion management.
+@@ -751,6 +751,19 @@ Supports congestion management.
@@ -57 +59 @@
-index 1c8db1ad13..2de156c695 100644
+index ac7de9a0f0..f85813ab52 100644
@@ -69 +71 @@
-index e21725bdea..c06b6fea9a 100644
+index 806cb033ff..64ee0f8c2f 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/testpmd: fix async indirect action list creation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (76 preceding siblings ...)
2024-04-13 12:49 ` patch 'doc: add traffic manager in features table' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'doc: add link speeds configuration in features table' " Xueming Li
` (45 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=15952c71eb673ca8ce6acd754bd39ef367896423
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 15952c71eb673ca8ce6acd754bd39ef367896423 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 7 Mar 2024 12:27:11 +0200
Subject: [PATCH] app/testpmd: fix async indirect action list creation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c1496cb606875e7fda120a41435731fccff8f664 ]
Testpmd calls the same function to create a legacy indirect action and
an indirect list action.
The function did not identify the required action correctly.
This patch adds an `indirect_list` boolean parameter that is
derived from the action type.
Fixes: 72a3dec7126f ("ethdev: add indirect flow list action")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 1 +
app/test-pmd/config.c | 5 ++---
app/test-pmd/testpmd.h | 1 +
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5647add29a..b19b3205f0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -12609,6 +12609,7 @@ cmd_flow_parsed(const struct buffer *in)
port_queue_action_handle_create(
in->port, in->queue, in->postpone,
in->args.vc.attr.group,
+ in->command == QUEUE_INDIRECT_ACTION_LIST_CREATE,
&((const struct rte_flow_indir_action_conf) {
.ingress = in->args.vc.attr.ingress,
.egress = in->args.vc.attr.egress,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 7c24e401ec..6d605556ff 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2996,6 +2996,7 @@ port_queue_flow_update(portid_t port_id, queueid_t queue_id,
int
port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
bool postpone, uint32_t id,
+ bool indirect_list,
const struct rte_flow_indir_action_conf *conf,
const struct rte_flow_action *action)
{
@@ -3005,8 +3006,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
int ret;
struct rte_flow_error error;
struct queue_job *job;
- bool is_indirect_list = action[1].type != RTE_FLOW_ACTION_TYPE_END;
-
ret = action_alloc(port_id, id, &pia);
if (ret)
@@ -3028,7 +3027,7 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x88, sizeof(error));
- if (is_indirect_list)
+ if (indirect_list)
queue_action_list_handle_create(port_id, queue_id, pia, job,
&attr, conf, action, &error);
else
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9b10a9ea1c..5bb1a79330 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -996,6 +996,7 @@ int port_queue_flow_update(portid_t port_id, queueid_t queue_id,
const struct rte_flow_action *actions);
int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
bool postpone, uint32_t id,
+ bool indirect_list,
const struct rte_flow_indir_action_conf *conf,
const struct rte_flow_action *action);
int port_queue_action_handle_destroy(portid_t port_id,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.414113375 +0800
+++ 0079-app-testpmd-fix-async-indirect-action-list-creation.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From c1496cb606875e7fda120a41435731fccff8f664 Mon Sep 17 00:00:00 2001
+From 15952c71eb673ca8ce6acd754bd39ef367896423 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c1496cb606875e7fda120a41435731fccff8f664 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 5f761903c1..fd6c51f72d 100644
+index 5647add29a..b19b3205f0 100644
@@ -28 +30 @@
-@@ -13237,6 +13237,7 @@ cmd_flow_parsed(const struct buffer *in)
+@@ -12609,6 +12609,7 @@ cmd_flow_parsed(const struct buffer *in)
@@ -37 +39 @@
-index 968d2164ab..ba1007ace6 100644
+index 7c24e401ec..6d605556ff 100644
@@ -40 +42 @@
-@@ -3099,6 +3099,7 @@ port_queue_flow_update(portid_t port_id, queueid_t queue_id,
+@@ -2996,6 +2996,7 @@ port_queue_flow_update(portid_t port_id, queueid_t queue_id,
@@ -48 +50 @@
-@@ -3108,8 +3109,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+@@ -3005,8 +3006,6 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
@@ -57 +59 @@
-@@ -3131,7 +3130,7 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+@@ -3028,7 +3027,7 @@ port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
@@ -67 +69 @@
-index 55df12033a..0afae7d771 100644
+index 9b10a9ea1c..5bb1a79330 100644
@@ -70 +72 @@
-@@ -1002,6 +1002,7 @@ int port_queue_flow_update(portid_t port_id, queueid_t queue_id,
+@@ -996,6 +996,7 @@ int port_queue_flow_update(portid_t port_id, queueid_t queue_id,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'doc: add link speeds configuration in features table' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (77 preceding siblings ...)
2024-04-13 12:49 ` patch 'app/testpmd: fix async indirect action list creation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/nfp: fix getting firmware VNIC version' " Xueming Li
` (44 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Huisong Li; +Cc: Chengwen Feng, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5853ebb3b959e581e6d41363e81fed80c5532f54
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5853ebb3b959e581e6d41363e81fed80c5532f54 Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Tue, 28 Nov 2023 21:00:05 +0800
Subject: [PATCH] doc: add link speeds configuration in features table
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 79758d40007c56231fce3cf1004ae5168616b3aa ]
Add features for link speeds.
Fixes: 82113036e4e5 ("ethdev: redesign link speed config")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
doc/guides/nics/features.rst | 11 +++++++++++
doc/guides/nics/features/atlantic.ini | 1 +
doc/guides/nics/features/bnxt.ini | 1 +
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/dpaa.ini | 1 +
doc/guides/nics/features/hns3.ini | 1 +
doc/guides/nics/features/i40e.ini | 1 +
doc/guides/nics/features/ice.ini | 1 +
doc/guides/nics/features/igb.ini | 1 +
doc/guides/nics/features/igc.ini | 1 +
doc/guides/nics/features/ionic.ini | 1 +
doc/guides/nics/features/ixgbe.ini | 1 +
doc/guides/nics/features/ngbe.ini | 1 +
doc/guides/nics/features/octeontx.ini | 1 +
doc/guides/nics/features/sfc.ini | 1 +
doc/guides/nics/features/thunderx.ini | 1 +
doc/guides/nics/features/txgbe.ini | 1 +
17 files changed, 27 insertions(+)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 966b3e17d1..cf9fabb8b8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -34,6 +34,17 @@ Supports getting the speed capabilities that the current device is capable of.
* **[related] API**: ``rte_eth_dev_info_get()``.
+.. _nic_features_link_speeds_config:
+
+Link speed configuration
+------------------------
+
+Supports configuring fixed speed and link autonegotiation.
+
+* **[uses] user config**: ``dev_conf.link_speeds:RTE_ETH_LINK_SPEED_*``.
+* **[related] API**: ``rte_eth_dev_configure()``.
+
+
.. _nic_features_link_status:
Link status
diff --git a/doc/guides/nics/features/atlantic.ini b/doc/guides/nics/features/atlantic.ini
index ef4155027c..29969c1493 100644
--- a/doc/guides/nics/features/atlantic.ini
+++ b/doc/guides/nics/features/atlantic.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
diff --git a/doc/guides/nics/features/bnxt.ini b/doc/guides/nics/features/bnxt.ini
index bd4e2295dc..c33889663d 100644
--- a/doc/guides/nics/features/bnxt.ini
+++ b/doc/guides/nics/features/bnxt.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 64ee0f8c2f..c30702c72e 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -8,6 +8,7 @@
;
[Features]
Speed capabilities =
+Link speed configuration =
Link status =
Link status event =
Removal event =
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index a382c7160c..b136ed191a 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Burst mode info = Y
diff --git a/doc/guides/nics/features/hns3.ini b/doc/guides/nics/features/hns3.ini
index a20ece20e8..8b623d3077 100644
--- a/doc/guides/nics/features/hns3.ini
+++ b/doc/guides/nics/features/hns3.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/i40e.ini b/doc/guides/nics/features/i40e.ini
index 2d168199f0..ef7514c44b 100644
--- a/doc/guides/nics/features/i40e.ini
+++ b/doc/guides/nics/features/i40e.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/ice.ini b/doc/guides/nics/features/ice.ini
index 8febbc4f1e..62869ef0a0 100644
--- a/doc/guides/nics/features/ice.ini
+++ b/doc/guides/nics/features/ice.ini
@@ -8,6 +8,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/igb.ini b/doc/guides/nics/features/igb.ini
index 7b4af6f86c..ee2408f3ee 100644
--- a/doc/guides/nics/features/igb.ini
+++ b/doc/guides/nics/features/igb.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = P
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/igc.ini b/doc/guides/nics/features/igc.ini
index 47d9344435..d6db18c1e8 100644
--- a/doc/guides/nics/features/igc.ini
+++ b/doc/guides/nics/features/igc.ini
@@ -4,6 +4,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
FW version = Y
diff --git a/doc/guides/nics/features/ionic.ini b/doc/guides/nics/features/ionic.ini
index af0fc5462a..64b2316288 100644
--- a/doc/guides/nics/features/ionic.ini
+++ b/doc/guides/nics/features/ionic.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Fast mbuf free = Y
diff --git a/doc/guides/nics/features/ixgbe.ini b/doc/guides/nics/features/ixgbe.ini
index f05fcec455..cb9331dbcd 100644
--- a/doc/guides/nics/features/ixgbe.ini
+++ b/doc/guides/nics/features/ixgbe.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 2701c5f051..1dfd92e96b 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Free Tx mbuf on demand = Y
diff --git a/doc/guides/nics/features/octeontx.ini b/doc/guides/nics/features/octeontx.ini
index fa1e18b120..46ae8318a9 100644
--- a/doc/guides/nics/features/octeontx.ini
+++ b/doc/guides/nics/features/octeontx.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Lock-free Tx queue = Y
diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini
index 8a9198adcb..f9654e69ed 100644
--- a/doc/guides/nics/features/sfc.ini
+++ b/doc/guides/nics/features/sfc.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
diff --git a/doc/guides/nics/features/thunderx.ini b/doc/guides/nics/features/thunderx.ini
index b33bb37c82..2ab8db7239 100644
--- a/doc/guides/nics/features/thunderx.ini
+++ b/doc/guides/nics/features/thunderx.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Queue start/stop = Y
diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini
index 3a11fb2037..be0af3dfad 100644
--- a/doc/guides/nics/features/txgbe.ini
+++ b/doc/guides/nics/features/txgbe.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Link speed configuration = Y
Link status = Y
Link status event = Y
Rx interrupt = Y
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.453206124 +0800
+++ 0080-doc-add-link-speeds-configuration-in-features-table.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 79758d40007c56231fce3cf1004ae5168616b3aa Mon Sep 17 00:00:00 2001
+From 5853ebb3b959e581e6d41363e81fed80c5532f54 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 79758d40007c56231fce3cf1004ae5168616b3aa ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -35 +37 @@
-index b4f7f1ee61..cd0115ffb3 100644
+index 966b3e17d1..cf9fabb8b8 100644
@@ -81 +83 @@
-index c06b6fea9a..1e9a156a2a 100644
+index 64ee0f8c2f..c30702c72e 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/nfp: fix getting firmware VNIC version' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (78 preceding siblings ...)
2024-04-13 12:49 ` patch 'doc: add link speeds configuration in features table' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/nfp: fix IPsec data endianness' " Xueming Li
` (43 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Qin Ke; +Cc: Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bec3117648637d45e894bde55adea7c2be1a9851
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bec3117648637d45e894bde55adea7c2be1a9851 Mon Sep 17 00:00:00 2001
From: Qin Ke <qin.ke@corigine.com>
Date: Mon, 11 Mar 2024 09:54:14 +0800
Subject: [PATCH] net/nfp: fix getting firmware VNIC version
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e72d6053a945b71348f5874cca6cd0be89eed8e0 ]
When getting the firmware VNIC version, the logic for representor
ports and other ports is inverted; fix it.
Fixes: c4de52eca76c ("net/nfp: remove redundancy for representor port")
Signed-off-by: Qin Ke <qin.ke@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/nfp_net_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 46d0e07850..10732f459c 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -2073,7 +2073,7 @@ nfp_net_firmware_version_get(struct rte_eth_dev *dev,
hw = nfp_net_get_hw(dev);
- if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) {
+ if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) == 0) {
snprintf(vnic_version, FW_VER_LEN, "%d.%d.%d.%d",
hw->ver.extend, hw->ver.class,
hw->ver.major, hw->ver.minor);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.484163284 +0800
+++ 0081-net-nfp-fix-getting-firmware-VNIC-version.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From e72d6053a945b71348f5874cca6cd0be89eed8e0 Mon Sep 17 00:00:00 2001
+From bec3117648637d45e894bde55adea7c2be1a9851 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e72d6053a945b71348f5874cca6cd0be89eed8e0 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 20e628bfd1..bd43925960 100644
+index 46d0e07850..10732f459c 100644
@@ -24 +26 @@
-@@ -2155,7 +2155,7 @@ nfp_net_firmware_version_get(struct rte_eth_dev *dev,
+@@ -2073,7 +2073,7 @@ nfp_net_firmware_version_get(struct rte_eth_dev *dev,
@@ -28,2 +30,2 @@
-- if (rte_eth_dev_is_repr(dev)) {
-+ if (!rte_eth_dev_is_repr(dev)) {
+- if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) {
++ if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) == 0) {
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/nfp: fix IPsec data endianness' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (79 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/nfp: fix getting firmware VNIC version' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/ena: fix fast mbuf free' " Xueming Li
` (42 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shihong Wang; +Cc: Chaoyong He, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5f75adca7e2ed0a0b264d6abda40ea81595d1c04
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5f75adca7e2ed0a0b264d6abda40ea81595d1c04 Mon Sep 17 00:00:00 2001
From: Shihong Wang <shihong.wang@corigine.com>
Date: Mon, 11 Mar 2024 10:49:39 +0800
Subject: [PATCH] net/nfp: fix IPsec data endianness
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7e13f2dc603e406eaa099a0b45c099a9d9004dd0 ]
The algorithm key of the security framework is stored in a u8
array in big-endian order, while the driver expects the key as
CPU-endian u32 words, so the endianness may need to be converted
to ensure that the value assigned to the driver is CPU-endian.
This patch removes the operation of converting IPsec Tx metadata
to big-endian to ensure that IPsec Tx metadata is CPU-endian.
Fixes: 547137405be7 ("net/nfp: initialize IPsec related content")
Fixes: 3d21da66c06b ("net/nfp: create security session")
Fixes: 310a1780581e ("net/nfp: support IPsec Rx and Tx offload")
Signed-off-by: Shihong Wang <shihong.wang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
drivers/net/nfp/nfp_ipsec.c | 72 +++++++++++++++++++++++--------------
drivers/net/nfp/nfp_ipsec.h | 9 ++---
2 files changed, 47 insertions(+), 34 deletions(-)
diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c
index 7ce9cca0b2..aebdbb2f48 100644
--- a/drivers/net/nfp/nfp_ipsec.c
+++ b/drivers/net/nfp/nfp_ipsec.c
@@ -18,6 +18,7 @@
#include "nfp_rxtx.h"
#define NFP_UDP_ESP_PORT 4500
+#define NFP_ESP_IV_LENGTH 8
static const struct rte_cryptodev_capabilities nfp_crypto_caps[] = {
{
@@ -521,7 +522,8 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
char *save;
char *iv_b;
char *iv_str;
- uint8_t *cfg_iv;
+ const rte_be32_t *iv_value;
+ uint8_t cfg_iv[NFP_ESP_IV_LENGTH];
iv_str = strdup(iv_string);
if (iv_str == NULL) {
@@ -529,8 +531,6 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
return;
}
- cfg_iv = (uint8_t *)cfg->aesgcm_fields.iv;
-
for (i = 0; i < iv_len; i++) {
iv_b = strtok_r(i ? NULL : iv_str, ",", &save);
if (iv_b == NULL)
@@ -539,8 +539,9 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
cfg_iv[i] = strtoul(iv_b, NULL, 0);
}
- *(uint32_t *)cfg_iv = rte_be_to_cpu_32(*(uint32_t *)cfg_iv);
- *(uint32_t *)&cfg_iv[4] = rte_be_to_cpu_32(*(uint32_t *)&cfg_iv[4]);
+ iv_value = (const rte_be32_t *)(cfg_iv);
+ cfg->aesgcm_fields.iv[0] = rte_be_to_cpu_32(iv_value[0]);
+ cfg->aesgcm_fields.iv[1] = rte_be_to_cpu_32(iv_value[1]);
free(iv_str);
}
@@ -581,7 +582,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
uint32_t offset;
uint32_t device_id;
const char *iv_str;
- const uint32_t *key;
+ const rte_be32_t *key;
struct nfp_net_hw *net_hw;
net_hw = eth_dev->data->dev_private;
@@ -631,7 +632,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- key = (const uint32_t *)(aead->key.data);
+ key = (const rte_be32_t *)(aead->key.data);
/*
* The CHACHA20's key order needs to be adjusted based on hardware design.
@@ -643,16 +644,22 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
for (i = 0; i < key_length / sizeof(cfg->cipher_key[0]); i++) {
index = (i + offset) % (key_length / sizeof(cfg->cipher_key[0]));
- cfg->cipher_key[index] = rte_cpu_to_be_32(*key++);
+ cfg->cipher_key[index] = rte_be_to_cpu_32(key[i]);
}
/*
- * The iv of the FW is equal to ESN by default. Reading the
- * iv of the configuration information is not supported.
+ * The iv of the FW is equal to ESN by default. Only the
+ * aead algorithm can offload the iv of configuration and
+ * the length of iv cannot be greater than NFP_ESP_IV_LENGTH.
*/
iv_str = getenv("ETH_SEC_IV_OVR");
if (iv_str != NULL) {
iv_len = aead->iv.length;
+ if (iv_len > NFP_ESP_IV_LENGTH) {
+ PMD_DRV_LOG(ERR, "Unsupported length of iv data");
+ return -EINVAL;
+ }
+
nfp_aesgcm_iv_update(cfg, iv_len, iv_str);
}
@@ -669,7 +676,7 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
int ret;
uint32_t i;
uint32_t device_id;
- const uint32_t *key;
+ const rte_be32_t *key;
struct nfp_net_hw *net_hw;
net_hw = eth_dev->data->dev_private;
@@ -703,14 +710,14 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- key = (const uint32_t *)(cipher->key.data);
+ key = (const rte_be32_t *)(cipher->key.data);
if (key_length > sizeof(cfg->cipher_key)) {
PMD_DRV_LOG(ERR, "Insufficient space for offloaded key");
return -EINVAL;
}
for (i = 0; i < key_length / sizeof(cfg->cipher_key[0]); i++)
- cfg->cipher_key[i] = rte_cpu_to_be_32(*key++);
+ cfg->cipher_key[i] = rte_be_to_cpu_32(key[i]);
return 0;
}
@@ -805,7 +812,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
uint32_t i;
uint8_t key_length;
uint32_t device_id;
- const uint32_t *key;
+ const rte_be32_t *key;
struct nfp_net_hw *net_hw;
if (digest_length == 0) {
@@ -852,7 +859,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
return -EINVAL;
}
- key = (const uint32_t *)(auth->key.data);
+ key = (const rte_be32_t *)(auth->key.data);
key_length = auth->key.length;
if (key_length > sizeof(cfg->auth_key)) {
PMD_DRV_LOG(ERR, "Insufficient space for offloaded auth key!");
@@ -860,7 +867,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
}
for (i = 0; i < key_length / sizeof(cfg->auth_key[0]); i++)
- cfg->auth_key[i] = rte_cpu_to_be_32(*key++);
+ cfg->auth_key[i] = rte_be_to_cpu_32(key[i]);
return 0;
}
@@ -900,7 +907,7 @@ nfp_crypto_msg_build(struct rte_eth_dev *eth_dev,
return ret;
}
- cfg->aesgcm_fields.salt = rte_cpu_to_be_32(conf->ipsec.salt);
+ cfg->aesgcm_fields.salt = conf->ipsec.salt;
break;
case RTE_CRYPTO_SYM_XFORM_AUTH:
/* Only support Auth + Cipher for inbound */
@@ -965,7 +972,10 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
struct rte_security_session_conf *conf,
struct nfp_ipsec_msg *msg)
{
+ int i;
int ret;
+ rte_be32_t *src_ip;
+ rte_be32_t *dst_ip;
struct ipsec_add_sa *cfg;
enum rte_security_ipsec_tunnel_type type;
@@ -1023,12 +1033,18 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
type = conf->ipsec.tunnel.type;
cfg->ctrl_word.mode = NFP_IPSEC_MODE_TUNNEL;
if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- cfg->src_ip.v4 = conf->ipsec.tunnel.ipv4.src_ip;
- cfg->dst_ip.v4 = conf->ipsec.tunnel.ipv4.dst_ip;
+ src_ip = (rte_be32_t *)&conf->ipsec.tunnel.ipv4.src_ip.s_addr;
+ dst_ip = (rte_be32_t *)&conf->ipsec.tunnel.ipv4.dst_ip.s_addr;
+ cfg->src_ip[0] = rte_be_to_cpu_32(src_ip[0]);
+ cfg->dst_ip[0] = rte_be_to_cpu_32(dst_ip[0]);
cfg->ipv6 = 0;
} else if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
- cfg->src_ip.v6 = conf->ipsec.tunnel.ipv6.src_addr;
- cfg->dst_ip.v6 = conf->ipsec.tunnel.ipv6.dst_addr;
+ src_ip = (rte_be32_t *)conf->ipsec.tunnel.ipv6.src_addr.s6_addr;
+ dst_ip = (rte_be32_t *)conf->ipsec.tunnel.ipv6.dst_addr.s6_addr;
+ for (i = 0; i < 4; i++) {
+ cfg->src_ip[i] = rte_be_to_cpu_32(src_ip[i]);
+ cfg->dst_ip[i] = rte_be_to_cpu_32(dst_ip[i]);
+ }
cfg->ipv6 = 1;
} else {
PMD_DRV_LOG(ERR, "Unsupported address family!");
@@ -1041,9 +1057,11 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
cfg->ctrl_word.mode = NFP_IPSEC_MODE_TRANSPORT;
if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
+ memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
cfg->ipv6 = 0;
} else if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
+ memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
cfg->ipv6 = 1;
} else {
PMD_DRV_LOG(ERR, "Unsupported address family!");
@@ -1177,18 +1195,18 @@ nfp_security_set_pkt_metadata(void *device,
desc_md = RTE_MBUF_DYNFIELD(m, offset, struct nfp_tx_ipsec_desc_msg *);
if (priv_session->msg.ctrl_word.ext_seq != 0 && sqn != NULL) {
- desc_md->esn.low = rte_cpu_to_be_32(*sqn);
- desc_md->esn.hi = rte_cpu_to_be_32(*sqn >> 32);
+ desc_md->esn.low = (uint32_t)*sqn;
+ desc_md->esn.hi = (uint32_t)(*sqn >> 32);
} else if (priv_session->msg.ctrl_word.ext_seq != 0) {
- desc_md->esn.low = rte_cpu_to_be_32(priv_session->ipsec.esn.low);
- desc_md->esn.hi = rte_cpu_to_be_32(priv_session->ipsec.esn.hi);
+ desc_md->esn.low = priv_session->ipsec.esn.low;
+ desc_md->esn.hi = priv_session->ipsec.esn.hi;
} else {
- desc_md->esn.low = rte_cpu_to_be_32(priv_session->ipsec.esn.value);
+ desc_md->esn.low = priv_session->ipsec.esn.low;
desc_md->esn.hi = 0;
}
desc_md->enc = 1;
- desc_md->sa_idx = rte_cpu_to_be_32(priv_session->sa_index);
+ desc_md->sa_idx = priv_session->sa_index;
}
return 0;
diff --git a/drivers/net/nfp/nfp_ipsec.h b/drivers/net/nfp/nfp_ipsec.h
index d7a729398a..f7c4f3f225 100644
--- a/drivers/net/nfp/nfp_ipsec.h
+++ b/drivers/net/nfp/nfp_ipsec.h
@@ -36,11 +36,6 @@ struct sa_ctrl_word {
uint32_t spare2 :1; /**< Must be set to 0 */
};
-union nfp_ip_addr {
- struct in6_addr v6;
- struct in_addr v4;
-};
-
struct ipsec_add_sa {
uint32_t cipher_key[8]; /**< Cipher Key */
union {
@@ -60,8 +55,8 @@ struct ipsec_add_sa {
uint8_t spare1;
uint32_t soft_byte_cnt; /**< Soft lifetime byte count */
uint32_t hard_byte_cnt; /**< Hard lifetime byte count */
- union nfp_ip_addr src_ip; /**< Src IP addr */
- union nfp_ip_addr dst_ip; /**< Dst IP addr */
+ uint32_t src_ip[4]; /**< Src IP addr */
+ uint32_t dst_ip[4]; /**< Dst IP addr */
uint16_t natt_dst_port; /**< NAT-T UDP Header dst port */
uint16_t natt_src_port; /**< NAT-T UDP Header src port */
uint32_t soft_lifetime_limit; /**< Soft lifetime time limit */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.508682352 +0800
+++ 0082-net-nfp-fix-IPsec-data-endianness.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 7e13f2dc603e406eaa099a0b45c099a9d9004dd0 Mon Sep 17 00:00:00 2001
+From 5f75adca7e2ed0a0b264d6abda40ea81595d1c04 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7e13f2dc603e406eaa099a0b45c099a9d9004dd0 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 0bf146b9be..205d1d594c 100644
+index 7ce9cca0b2..aebdbb2f48 100644
@@ -30,2 +32,2 @@
-@@ -21,6 +21,7 @@
- #include "nfp_net_meta.h"
+@@ -18,6 +18,7 @@
+ #include "nfp_rxtx.h"
@@ -38 +40 @@
-@@ -524,7 +525,8 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
+@@ -521,7 +522,8 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
@@ -48 +50 @@
-@@ -532,8 +534,6 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
+@@ -529,8 +531,6 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
@@ -57 +59 @@
-@@ -542,8 +542,9 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
+@@ -539,8 +539,9 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
@@ -69 +71 @@
-@@ -584,7 +585,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
+@@ -581,7 +582,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
@@ -78 +80 @@
-@@ -634,7 +635,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
+@@ -631,7 +632,7 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
@@ -87 +89 @@
-@@ -646,16 +647,22 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
+@@ -643,16 +644,22 @@ nfp_aead_map(struct rte_eth_dev *eth_dev,
@@ -113 +115 @@
-@@ -672,7 +679,7 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
+@@ -669,7 +676,7 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
@@ -122 +124 @@
-@@ -706,14 +713,14 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
+@@ -703,14 +710,14 @@ nfp_cipher_map(struct rte_eth_dev *eth_dev,
@@ -139 +141 @@
-@@ -808,7 +815,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
+@@ -805,7 +812,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
@@ -148 +150 @@
-@@ -855,7 +862,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
+@@ -852,7 +859,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
@@ -157 +159 @@
-@@ -863,7 +870,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
+@@ -860,7 +867,7 @@ nfp_auth_map(struct rte_eth_dev *eth_dev,
@@ -166 +168 @@
-@@ -903,7 +910,7 @@ nfp_crypto_msg_build(struct rte_eth_dev *eth_dev,
+@@ -900,7 +907,7 @@ nfp_crypto_msg_build(struct rte_eth_dev *eth_dev,
@@ -175 +177 @@
-@@ -968,7 +975,10 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
+@@ -965,7 +972,10 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
@@ -186 +188 @@
-@@ -1026,12 +1036,18 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
+@@ -1023,12 +1033,18 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
@@ -209 +211 @@
-@@ -1044,9 +1060,11 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
+@@ -1041,9 +1057,11 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
@@ -221 +223 @@
-@@ -1180,18 +1198,18 @@ nfp_security_set_pkt_metadata(void *device,
+@@ -1177,18 +1195,18 @@ nfp_security_set_pkt_metadata(void *device,
@@ -247 +249 @@
-index 4ef0e196be..8fdb7fd534 100644
+index d7a729398a..f7c4f3f225 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ena: fix fast mbuf free' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (80 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/nfp: fix IPsec data endianness' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/ena/base: limit exponential backoff' " Xueming Li
` (41 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shai Brandes; +Cc: Amit Bernstein, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2fa8497bd30eb130c45d8e6f3538f8e50b4102c2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2fa8497bd30eb130c45d8e6f3538f8e50b4102c2 Mon Sep 17 00:00:00 2001
From: Shai Brandes <shaibran@amazon.com>
Date: Tue, 12 Mar 2024 20:06:50 +0200
Subject: [PATCH] net/ena: fix fast mbuf free
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 89b081e154c5f81295014fd8b5b32193dd01b9cb ]
In case the application enables fast mbuf release optimization,
the driver releases 256 TX mbufs in bulk upon reaching the
TX free threshold.
The existing implementation uses rte_mempool_put_bulk to bulk-free
Tx mbufs, which exclusively supports direct mbufs.
If the application transmits indirect mbufs, the driver must
also decrement the mbuf reference count and unlink the mbuf segment.
In such cases, the driver should use rte_pktmbuf_free_bulk.
Fixes: c339f53823f3 ("net/ena: support fast mbuf free")
Signed-off-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
---
drivers/net/ena/ena_ethdev.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index dc846d2e84..1e138849cc 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -3117,8 +3117,7 @@ ena_tx_cleanup_mbuf_fast(struct rte_mbuf **mbufs_to_clean,
m_next = mbuf->next;
mbufs_to_clean[mbuf_cnt++] = mbuf;
if (mbuf_cnt == buf_size) {
- rte_mempool_put_bulk(mbufs_to_clean[0]->pool, (void **)mbufs_to_clean,
- (unsigned int)mbuf_cnt);
+ rte_pktmbuf_free_bulk(mbufs_to_clean, mbuf_cnt);
mbuf_cnt = 0;
}
mbuf = m_next;
@@ -3186,8 +3185,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
}
if (mbuf_cnt != 0)
- rte_mempool_put_bulk(mbufs_to_clean[0]->pool,
- (void **)mbufs_to_clean, mbuf_cnt);
+ rte_pktmbuf_free_bulk(mbufs_to_clean, mbuf_cnt);
/* Notify completion handler that full cleanup was performed */
if (free_pkt_cnt == 0 || total_tx_pkts < cleanup_budget)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.531356222 +0800
+++ 0083-net-ena-fix-fast-mbuf-free.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 89b081e154c5f81295014fd8b5b32193dd01b9cb Mon Sep 17 00:00:00 2001
+From 2fa8497bd30eb130c45d8e6f3538f8e50b4102c2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 89b081e154c5f81295014fd8b5b32193dd01b9cb ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 3157237c0d..537ee9f8c3 100644
+index dc846d2e84..1e138849cc 100644
@@ -28 +30 @@
-@@ -3122,8 +3122,7 @@ ena_tx_cleanup_mbuf_fast(struct rte_mbuf **mbufs_to_clean,
+@@ -3117,8 +3117,7 @@ ena_tx_cleanup_mbuf_fast(struct rte_mbuf **mbufs_to_clean,
@@ -38 +40 @@
-@@ -3191,8 +3190,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
+@@ -3186,8 +3185,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ena/base: limit exponential backoff' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (81 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/ena: fix fast mbuf free' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/ena/base: restructure interrupt handling' " Xueming Li
` (40 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shai Brandes; +Cc: Amit Bernstein, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e1abac3de0eed7755edee3400722db6df5395abe
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e1abac3de0eed7755edee3400722db6df5395abe Mon Sep 17 00:00:00 2001
From: Shai Brandes <shaibran@amazon.com>
Date: Tue, 12 Mar 2024 20:06:52 +0200
Subject: [PATCH] net/ena/base: limit exponential backoff
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4b378679c88ff231b97b71aaedc2d424e82f1aba ]
Limit the value of the exponent used for this backoff
to 16, so the multiplier is capped at (1<<16), preventing it
from reaching an excessive value (1<<32) or potentially even
overflowing.
In addition, for uniformity and readability, the min/max bound
was made the first parameter in calls to the ENA_MIN32 and
ENA_MAX32 macros.
Fixes: 0c84e04824db ("net/ena/base: make delay exponential in polling functions")
Signed-off-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
---
drivers/net/ena/base/ena_com.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 6953a1fa33..31c37b0ab3 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -34,6 +34,8 @@
#define ENA_REGS_ADMIN_INTR_MASK 1
+#define ENA_MAX_BACKOFF_DELAY_EXP 16U
+
#define ENA_MIN_ADMIN_POLL_US 100
#define ENA_MAX_ADMIN_POLL_US 5000
@@ -545,8 +547,9 @@ static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue,
static void ena_delay_exponential_backoff_us(u32 exp, u32 delay_us)
{
+ exp = ENA_MIN32(ENA_MAX_BACKOFF_DELAY_EXP, exp);
delay_us = ENA_MAX32(ENA_MIN_ADMIN_POLL_US, delay_us);
- delay_us = ENA_MIN32(delay_us * (1U << exp), ENA_MAX_ADMIN_POLL_US);
+ delay_us = ENA_MIN32(ENA_MAX_ADMIN_POLL_US, delay_us * (1U << exp));
ENA_USLEEP(delay_us);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.564358479 +0800
+++ 0084-net-ena-base-limit-exponential-backoff.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 4b378679c88ff231b97b71aaedc2d424e82f1aba Mon Sep 17 00:00:00 2001
+From e1abac3de0eed7755edee3400722db6df5395abe Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4b378679c88ff231b97b71aaedc2d424e82f1aba ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ena/base: restructure interrupt handling' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (82 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/ena/base: limit exponential backoff' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/nfp: fix switch domain free check' " Xueming Li
` (39 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shai Brandes; +Cc: Amit Bernstein, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=aa850bad00f408ad4d680023dd8c952e062606b1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From aa850bad00f408ad4d680023dd8c952e062606b1 Mon Sep 17 00:00:00 2001
From: Shai Brandes <shaibran@amazon.com>
Date: Tue, 12 Mar 2024 20:07:00 +0200
Subject: [PATCH] net/ena/base: restructure interrupt handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 553653ccc18c1ed9fb0406e1b0130e945d5ab30f ]
When invoking an admin command in interrupt mode, if the interrupt
is received after the timeout and after the calling function has
finished running, the response will be written into memory that is
no longer valid.
Fixes: 99ecfbf845b3 ("ena: import communication layer")
Signed-off-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
---
drivers/net/ena/base/ena_com.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 31c37b0ab3..57ccde9545 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -179,6 +179,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
static void comp_ctxt_release(struct ena_com_admin_queue *queue,
struct ena_comp_ctx *comp_ctx)
{
+ comp_ctx->user_cqe = NULL;
comp_ctx->occupied = false;
ATOMIC32_DEC(&queue->outstanding_cmds);
}
@@ -472,6 +473,9 @@ static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *a
return;
}
+ if (!comp_ctx->occupied)
+ return;
+
comp_ctx->status = ENA_CMD_COMPLETED;
comp_ctx->comp_status = cqe->acq_common_descriptor.status;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:07.600492132 +0800
+++ 0085-net-ena-base-restructure-interrupt-handling.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 553653ccc18c1ed9fb0406e1b0130e945d5ab30f Mon Sep 17 00:00:00 2001
+From aa850bad00f408ad4d680023dd8c952e062606b1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 553653ccc18c1ed9fb0406e1b0130e945d5ab30f ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index fb3ad27d0a..a0c88b1a0e 100644
+index 31c37b0ab3..57ccde9545 100644
@@ -24 +26 @@
-@@ -181,6 +181,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
+@@ -179,6 +179,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
@@ -32 +34 @@
-@@ -474,6 +475,9 @@ static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *a
+@@ -472,6 +473,9 @@ static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *a
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/nfp: fix switch domain free check' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (83 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/ena/base: restructure interrupt handling' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/nfp: fix initialization failure flow' " Xueming Li
` (38 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Chaoyong He; +Cc: Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=dd48153b15f1b6113ed16f14061e891b556404ea
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From dd48153b15f1b6113ed16f14061e891b556404ea Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Thu, 14 Mar 2024 15:40:17 +0800
Subject: [PATCH] net/nfp: fix switch domain free check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5b1b9f9c11bedb672cdf01c834d6c341d50a0f1e ]
CI found a call to 'rte_eth_switch_domain_free()' whose return
value was not checked.
Coverity issue: 414936
Fixes: 20eaa8e2ebae ("net/nfp: free switch domain ID on close")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 326d36c3a9..ecf7a1e576 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -814,7 +814,8 @@ nfp_uninit_app_fw_flower(struct nfp_pf_dev *pf_dev)
rte_free(app_fw_flower->pf_hw);
nfp_mtr_priv_uninit(pf_dev);
nfp_flow_priv_uninit(pf_dev);
- rte_eth_switch_domain_free(app_fw_flower->switch_domain_id);
+ if (rte_eth_switch_domain_free(app_fw_flower->switch_domain_id) != 0)
+ PMD_DRV_LOG(WARNING, "Failed to free switch domain for device");
rte_free(app_fw_flower);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:07.640225280 +0800
+++ 0086-net-nfp-fix-switch-domain-free-check.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 5b1b9f9c11bedb672cdf01c834d6c341d50a0f1e Mon Sep 17 00:00:00 2001
+From dd48153b15f1b6113ed16f14061e891b556404ea Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5b1b9f9c11bedb672cdf01c834d6c341d50a0f1e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 97219ff379..303f6bd3f6 100644
+index 326d36c3a9..ecf7a1e576 100644
@@ -24 +26 @@
-@@ -793,7 +793,8 @@ nfp_uninit_app_fw_flower(struct nfp_pf_dev *pf_dev)
+@@ -814,7 +814,8 @@ nfp_uninit_app_fw_flower(struct nfp_pf_dev *pf_dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/nfp: fix initialization failure flow' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (84 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/nfp: fix switch domain free check' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'app/testpmd: fix --stats-period option check' " Xueming Li
` (37 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Chaoyong He; +Cc: Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0884b3bd36a9880955f5b7d9a59c5a71619e3c0f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0884b3bd36a9880955f5b7d9a59c5a71619e3c0f Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Thu, 14 Mar 2024 15:40:19 +0800
Subject: [PATCH] net/nfp: fix initialization failure flow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fdaa8f98d754a704e00c7738c177efd43b669081 ]
CI found unreachable control flow; fix it by removing the 'return'
statement that should have been deleted.
Coverity issue: 414938
Fixes: 369945667251 ("net/nfp: fix resource leak for device initialization")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/nfp_ethdev.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 5e473d9c16..7baacd18b0 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -669,7 +669,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
err = nfp_net_tlv_caps_parse(eth_dev);
if (err != 0) {
PMD_INIT_LOG(ERR, "Failed to parser TLV caps");
- return err;
goto free_area;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.672265038 +0800
+++ 0087-net-nfp-fix-initialization-failure-flow.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From fdaa8f98d754a704e00c7738c177efd43b669081 Mon Sep 17 00:00:00 2001
+From 0884b3bd36a9880955f5b7d9a59c5a71619e3c0f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fdaa8f98d754a704e00c7738c177efd43b669081 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 31c54a595c..568de1d024 100644
+index 5e473d9c16..7baacd18b0 100644
@@ -24 +26 @@
-@@ -936,7 +936,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
+@@ -669,7 +669,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'app/testpmd: fix --stats-period option check' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (85 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/nfp: fix initialization failure flow' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'app/testpmd: fix burst option parsing' " Xueming Li
` (36 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: David Marchand; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6c2174ad80afb5de9602d4355658723f151df5a6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6c2174ad80afb5de9602d4355658723f151df5a6 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Thu, 14 Mar 2024 10:17:02 +0100
Subject: [PATCH] app/testpmd: fix --stats-period option check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2ae06c579ea4e1cd234c5b14bef2957ee154ec74 ]
Rather than silently ignore an invalid value, raise an error for
stats-period user input.
Fixes: cfea1f3048d1 ("app/testpmd: print statistics periodically")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/parameters.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 11b0cce577..d715750bb8 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -776,7 +776,7 @@ launch_args_parse(int argc, char** argv)
n = strtoul(optarg, &end, 10);
if ((optarg[0] == '\0') || (end == NULL) ||
(*end != '\0'))
- break;
+ rte_exit(EXIT_FAILURE, "Invalid stats-period value\n");
stats_period = n;
break;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.702680399 +0800
+++ 0088-app-testpmd-fix-stats-period-option-check.patch 2024-04-13 20:43:05.037753879 +0800
@@ -1 +1 @@
-From 2ae06c579ea4e1cd234c5b14bef2957ee154ec74 Mon Sep 17 00:00:00 2001
+From 6c2174ad80afb5de9602d4355658723f151df5a6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2ae06c579ea4e1cd234c5b14bef2957ee154ec74 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
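The hunk above keeps testpmd's strict `strtoul()` validation but turns the silent `break` into a hard error. The end-pointer check it relies on can be sketched in isolation (the `parse_period` helper is hypothetical, not testpmd code):

```c
#include <stdlib.h>

/* Strict strtoul() validation as used above: reject an empty string or
 * any trailing garbage instead of silently accepting a partial parse. */
static int
parse_period(const char *arg, unsigned long *out)
{
    char *end = NULL;
    unsigned long n = strtoul(arg, &end, 10);

    /* After strtoul(), *end must be the terminating NUL, otherwise the
     * input contained non-numeric trailing characters. */
    if (arg[0] == '\0' || end == NULL || *end != '\0')
        return -1;   /* the fix turns this case into rte_exit() */
    *out = n;
    return 0;
}
```

With the fix, `--stats-period 10x` aborts testpmd startup instead of leaving `stats_period` at its previous value.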
* patch 'app/testpmd: fix burst option parsing' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (86 preceding siblings ...)
2024-04-13 12:49 ` patch 'app/testpmd: fix --stats-period option check' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'app/testpmd: fix error message for invalid option' " Xueming Li
` (35 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: David Marchand; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=92c08367ea185939f79ec9b562e7e40b1d92e352
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 92c08367ea185939f79ec9b562e7e40b1d92e352 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Thu, 14 Mar 2024 10:17:03 +0100
Subject: [PATCH] app/testpmd: fix burst option parsing
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6fa896ae804a7dbd7fb766643f733dbad12bba43 ]
rte_eth_dev_info_get() is not supposed to fail for a valid port_id, but
for the theoretical case when it would fail, raise an error rather than
skip subsequent options.
Fixes: 6f51deb903b2 ("app/testpmd: check status of getting ethdev info")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/parameters.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index d715750bb8..3414a0d38c 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1128,7 +1128,9 @@ launch_args_parse(int argc, char** argv)
0,
&dev_info);
if (ret != 0)
- return;
+ rte_exit(EXIT_FAILURE, "Failed to get driver "
+ "recommended burst size, please provide a "
+ "value between 1 and %d\n", MAX_PKT_BURST);
rec_nb_pkts = dev_info
.default_rxportconf.burst_size;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:07.730374062 +0800
+++ 0089-app-testpmd-fix-burst-option-parsing.patch 2024-04-13 20:43:05.047753866 +0800
@@ -1 +1 @@
-From 6fa896ae804a7dbd7fb766643f733dbad12bba43 Mon Sep 17 00:00:00 2001
+From 92c08367ea185939f79ec9b562e7e40b1d92e352 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6fa896ae804a7dbd7fb766643f733dbad12bba43 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
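The change above converts a silent early `return` into a fail-fast exit with an actionable range hint. A reduced sketch of that control flow (the helper and the `MAX_PKT_BURST` value of 512 are assumptions for illustration; testpmd calls `rte_exit()` rather than returning):

```c
#include <stdio.h>

#define MAX_PKT_BURST 512   /* testpmd's limit; value assumed here */

/* If the driver cannot report a recommended burst size, fail fast with
 * a usable range instead of silently skipping the option. */
static int
resolve_burst(int query_ok, int recommended)
{
    if (!query_ok) {
        fprintf(stderr,
            "Failed to get driver recommended burst size, "
            "please provide a value between 1 and %d\n", MAX_PKT_BURST);
        return -1;   /* testpmd: rte_exit(EXIT_FAILURE, ...) */
    }
    return recommended;
}
```

The rationale is the same as in the stats-period fix: a bad or unresolvable option should stop argument parsing loudly, not fall through and leave later options unprocessed.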
* patch 'app/testpmd: fix error message for invalid option' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (87 preceding siblings ...)
2024-04-13 12:49 ` patch 'app/testpmd: fix burst option parsing' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/hns3: support new device' " Xueming Li
` (34 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: David Marchand; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=97089aa02e6cc724601cae527ff8ddb3ca0479e3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 97089aa02e6cc724601cae527ff8ddb3ca0479e3 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Thu, 14 Mar 2024 10:17:04 +0100
Subject: [PATCH] app/testpmd: fix error message for invalid option
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c25382fa81533429d5968b47b356c79153b76212 ]
"""
The variable optind is the index of the next element to be processed in
argv. The system initializes this value to 1. The caller can reset it
to 1 to restart scanning of the same argv, or when scanning a new
argument vector.
"""
Hence, if an invalid option is passed through testpmd cmdline, getopt
returns '?' and increments optind to the next index in argv for a
subsequent call.
The message should log the previous index.
Fixes: 8fad2e5ab2c5 ("app/testpmd: report invalid command line parameter")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/parameters.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3414a0d38c..a4c09e2a2b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1497,7 +1497,7 @@ launch_args_parse(int argc, char** argv)
break;
default:
usage(argv[0]);
- fprintf(stderr, "Invalid option: %s\n", argv[optind]);
+ fprintf(stderr, "Invalid option: %s\n", argv[optind - 1]);
rte_exit(EXIT_FAILURE,
"Command line is incomplete or incorrect\n");
break;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.759182325 +0800
+++ 0090-app-testpmd-fix-error-message-for-invalid-option.patch 2024-04-13 20:43:05.047753866 +0800
@@ -1 +1 @@
-From c25382fa81533429d5968b47b356c79153b76212 Mon Sep 17 00:00:00 2001
+From 97089aa02e6cc724601cae527ff8ddb3ca0479e3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c25382fa81533429d5968b47b356c79153b76212 ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/hns3: support new device' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (88 preceding siblings ...)
2024-04-13 12:49 ` patch 'app/testpmd: fix error message for invalid option' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix HWS meter actions availability' " Xueming Li
` (33 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Jie Hai; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fe697bbce32c5ec34bac71e31407108d3190da12
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fe697bbce32c5ec34bac71e31407108d3190da12 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Fri, 15 Mar 2024 10:54:48 +0800
Subject: [PATCH] net/hns3: support new device
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3f1436d7006c2659232305ef2b8b186796319041 ]
This patch introduces two new devices: on-chip network
interface controllers with RDMA/DCB/ROH support. One is 100GE
and the other is 200GE. Both can be found on HIP09/HIP10 SoCs.
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
doc/guides/nics/hns3.rst | 2 +-
drivers/net/hns3/hns3_cmd.c | 4 +++-
drivers/net/hns3/hns3_ethdev.c | 2 ++
drivers/net/hns3/hns3_ethdev.h | 2 ++
4 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/hns3.rst b/doc/guides/nics/hns3.rst
index 3b0613fc1b..3e84d1ff1c 100644
--- a/doc/guides/nics/hns3.rst
+++ b/doc/guides/nics/hns3.rst
@@ -6,7 +6,7 @@ HNS3 Poll Mode Driver
The hns3 PMD (**librte_net_hns3**) provides poll mode driver support
for the inbuilt HiSilicon Network Subsystem(HNS) network engine
-found in the HiSilicon Kunpeng 920 SoC and Kunpeng 930 SoC .
+found in the HiSilicon Kunpeng 920 SoC (HIP08) and Kunpeng 930 SoC (HIP09/HIP10).
Features
--------
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 2c1664485b..001ff49b36 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -545,7 +545,9 @@ hns3_set_dcb_capability(struct hns3_hw *hw)
if (device_id == HNS3_DEV_ID_25GE_RDMA ||
device_id == HNS3_DEV_ID_50GE_RDMA ||
device_id == HNS3_DEV_ID_100G_RDMA_MACSEC ||
- device_id == HNS3_DEV_ID_200G_RDMA)
+ device_id == HNS3_DEV_ID_200G_RDMA ||
+ device_id == HNS3_DEV_ID_100G_ROH ||
+ device_id == HNS3_DEV_ID_200G_ROH)
hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_DCB_B, 1);
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index eafcf2c6f6..90dbc4a84b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -6648,6 +6648,8 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_50GE_RDMA) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_RDMA_MACSEC) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_200G_RDMA) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_ROH) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_200G_ROH) },
{ .vendor_id = 0, }, /* sentinel */
};
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 12d8299def..e70c5fff2a 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -28,7 +28,9 @@
#define HNS3_DEV_ID_25GE_RDMA 0xA222
#define HNS3_DEV_ID_50GE_RDMA 0xA224
#define HNS3_DEV_ID_100G_RDMA_MACSEC 0xA226
+#define HNS3_DEV_ID_100G_ROH 0xA227
#define HNS3_DEV_ID_200G_RDMA 0xA228
+#define HNS3_DEV_ID_200G_ROH 0xA22C
#define HNS3_DEV_ID_100G_VF 0xA22E
#define HNS3_DEV_ID_100G_RDMA_PFC_VF 0xA22F
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.785417390 +0800
+++ 0091-net-hns3-support-new-device.patch 2024-04-13 20:43:05.047753866 +0800
@@ -1 +1 @@
-From 3f1436d7006c2659232305ef2b8b186796319041 Mon Sep 17 00:00:00 2001
+From fe697bbce32c5ec34bac71e31407108d3190da12 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3f1436d7006c2659232305ef2b8b186796319041 ]
@@ -10,2 +12,0 @@
-Cc: stable@dpdk.org
-
@@ -14,6 +15,5 @@
- doc/guides/nics/hns3.rst | 2 +-
- doc/guides/rel_notes/release_24_03.rst | 4 ++++
- drivers/net/hns3/hns3_cmd.c | 4 +++-
- drivers/net/hns3/hns3_ethdev.c | 2 ++
- drivers/net/hns3/hns3_ethdev.h | 2 ++
- 5 files changed, 12 insertions(+), 2 deletions(-)
+ doc/guides/nics/hns3.rst | 2 +-
+ drivers/net/hns3/hns3_cmd.c | 4 +++-
+ drivers/net/hns3/hns3_ethdev.c | 2 ++
+ drivers/net/hns3/hns3_ethdev.h | 2 ++
+ 4 files changed, 8 insertions(+), 2 deletions(-)
@@ -34,15 +33,0 @@
-diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
-index 8e809456aa..14826ea08f 100644
---- a/doc/guides/rel_notes/release_24_03.rst
-+++ b/doc/guides/rel_notes/release_24_03.rst
-@@ -123,6 +123,10 @@ New Features
-
- * Added support for 5760X device family.
-
-+* **Updated HiSilicon hns3 ethdev driver.**
-+
-+ * Added new device supporting RDMA/DCB/ROH with PCI IDs: ``0xa227, 0xa22c``.
-+
- * **Updated Marvell cnxk net driver.**
-
- * Added support for port representors.
@@ -65 +50 @@
-index b10d1216d2..9730b9a7e9 100644
+index eafcf2c6f6..90dbc4a84b 100644
@@ -68 +53 @@
-@@ -6649,6 +6649,8 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
+@@ -6648,6 +6648,8 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
^ permalink raw reply [flat|nested] 263+ messages in thread
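The hns3 change above adds two device IDs to a sentinel-terminated PCI ID table. The pattern can be sketched with plain C (the table holds only the two new ROH IDs here, and the `id_supported` lookup helper is hypothetical; real probing is done by the DPDK PCI bus, and the Huawei vendor ID `0x19e5` is assumed from public PCI ID listings):

```c
#include <stdint.h>

/* Sentinel-terminated PCI ID table, as used by pci_id_hns3_map. */
struct pci_id { uint16_t vendor_id; uint16_t device_id; };

#define PCI_VENDOR_ID_HUAWEI 0x19e5   /* assumed value for the sketch */
#define HNS3_DEV_ID_100G_ROH 0xA227
#define HNS3_DEV_ID_200G_ROH 0xA22C

static const struct pci_id id_map[] = {
    { PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_ROH },
    { PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_200G_ROH },
    { 0, 0 },   /* sentinel: vendor_id == 0 terminates the scan */
};

static int
id_supported(uint16_t vendor, uint16_t device)
{
    const struct pci_id *p;

    for (p = id_map; p->vendor_id != 0; p++)
        if (p->vendor_id == vendor && p->device_id == device)
            return 1;
    return 0;
}
```

Because the table ends in a `{ .vendor_id = 0 }` sentinel rather than a length, new devices are supported simply by appending entries before the sentinel, which is all this patch does.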
* patch 'net/mlx5: fix HWS meter actions availability' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (89 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/hns3: support new device' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix sync meter processing in HWS' " Xueming Li
` (32 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=50eb03f8d3302810ac2d966cff17dd1be6549bd7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 50eb03f8d3302810ac2d966cff17dd1be6549bd7 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 7 Mar 2024 12:19:08 +0200
Subject: [PATCH] net/mlx5: fix HWS meter actions availability
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7576c32eefd3b8444cff7f946361fb5c074634a7 ]
Allow compilation of HWS meter code only on platforms
that support HWS.
Fixes: 24865366e495 ("net/mlx5: support flow meter action for HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_meter.c | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 7cbf772ea4..beeb868c8c 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -618,6 +618,7 @@ mlx5_flow_meter_profile_get(struct rte_eth_dev *dev,
meter_profile_id);
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
/**
* Callback to add MTR profile with HWS.
*
@@ -697,6 +698,7 @@ mlx5_flow_meter_profile_hws_delete(struct rte_eth_dev *dev,
memset(fmp, 0, sizeof(struct mlx5_flow_meter_profile));
return 0;
}
+#endif
/**
* Find policy by id.
@@ -839,6 +841,7 @@ mlx5_flow_meter_policy_validate(struct rte_eth_dev *dev,
return 0;
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
/**
* Callback to check MTR policy action validate for HWS
*
@@ -875,6 +878,7 @@ mlx5_flow_meter_policy_hws_validate(struct rte_eth_dev *dev,
}
return 0;
}
+#endif
static int
__mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev,
@@ -1201,6 +1205,7 @@ mlx5_flow_meter_policy_get(struct rte_eth_dev *dev,
&policy_idx);
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
/**
* Callback to delete MTR policy for HWS.
*
@@ -1523,7 +1528,7 @@ policy_add_err:
RTE_MTR_ERROR_TYPE_UNSPECIFIED,
NULL, "Failed to create meter policy.");
}
-
+#endif
/**
* Check meter validation.
*
@@ -1893,6 +1898,7 @@ error:
NULL, "Failed to create devx meter.");
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
/**
* Create meter rules.
*
@@ -1976,6 +1982,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
__atomic_fetch_add(&policy->ref_cnt, 1, __ATOMIC_RELAXED);
return 0;
}
+#endif
static int
mlx5_flow_meter_params_flush(struct rte_eth_dev *dev,
@@ -2460,6 +2467,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
.stats_read = mlx5_flow_meter_stats_read,
};
+#if defined(HAVE_MLX5_HWS_SUPPORT)
static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
.capabilities_get = mlx5_flow_mtr_cap_get,
.meter_profile_add = mlx5_flow_meter_profile_hws_add,
@@ -2478,6 +2486,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
.stats_update = NULL,
.stats_read = NULL,
};
+#endif
/**
* Get meter operations.
@@ -2493,12 +2502,16 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
int
mlx5_flow_meter_ops_get(struct rte_eth_dev *dev __rte_unused, void *arg)
{
+#if defined(HAVE_MLX5_HWS_SUPPORT)
struct mlx5_priv *priv = dev->data->dev_private;
if (priv->sh->config.dv_flow_en == 2)
*(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_hws_ops;
else
*(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_ops;
+#else
+ *(const struct rte_mtr_ops **)arg = &mlx5_flow_mtr_ops;
+#endif
return 0;
}
@@ -2877,7 +2890,6 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
struct mlx5_flow_meter_profile *fmp;
struct mlx5_legacy_flow_meter *legacy_fm;
struct mlx5_flow_meter_info *fm;
- struct mlx5_flow_meter_policy *policy;
struct mlx5_flow_meter_sub_policy *sub_policy;
void *tmp;
uint32_t i, mtr_idx, policy_idx;
@@ -2945,15 +2957,20 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
mlx5_l3t_destroy(priv->policy_idx_tbl);
priv->policy_idx_tbl = NULL;
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
if (priv->mtr_policy_arr) {
+ struct mlx5_flow_meter_policy *policy;
+
for (i = 0; i < priv->mtr_config.nb_meter_policies; i++) {
policy = mlx5_flow_meter_policy_find(dev, i,
&policy_idx);
- if (policy->initialized)
+ if (policy->initialized) {
mlx5_flow_meter_policy_hws_delete(dev, i,
error);
+ }
}
}
+#endif
if (priv->mtr_profile_tbl) {
MLX5_L3T_FOREACH(priv->mtr_profile_tbl, i, entry) {
fmp = entry;
@@ -2967,14 +2984,17 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
mlx5_l3t_destroy(priv->mtr_profile_tbl);
priv->mtr_profile_tbl = NULL;
}
+#if defined(HAVE_MLX5_HWS_SUPPORT)
if (priv->mtr_profile_arr) {
for (i = 0; i < priv->mtr_config.nb_meter_profiles; i++) {
fmp = mlx5_flow_meter_profile_find(priv, i);
- if (fmp->initialized)
+ if (fmp->initialized) {
mlx5_flow_meter_profile_hws_delete(dev, i,
error);
+ }
}
}
+#endif
/* Delete default policy table. */
mlx5_flow_destroy_def_policy(dev);
if (priv->sh->refcnt == 1)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.817311049 +0800
+++ 0092-net-mlx5-fix-HWS-meter-actions-availability.patch 2024-04-13 20:43:05.047753866 +0800
@@ -1 +1 @@
-From 7576c32eefd3b8444cff7f946361fb5c074634a7 Mon Sep 17 00:00:00 2001
+From 50eb03f8d3302810ac2d966cff17dd1be6549bd7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7576c32eefd3b8444cff7f946361fb5c074634a7 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index c0578ce6e9..57de95b4b9 100644
+index 7cbf772ea4..beeb868c8c 100644
@@ -22 +24 @@
-@@ -896,6 +896,7 @@ mlx5_flow_meter_profile_get(struct rte_eth_dev *dev,
+@@ -618,6 +618,7 @@ mlx5_flow_meter_profile_get(struct rte_eth_dev *dev,
@@ -30 +32 @@
-@@ -981,6 +982,7 @@ mlx5_flow_meter_profile_hws_delete(struct rte_eth_dev *dev,
+@@ -697,6 +698,7 @@ mlx5_flow_meter_profile_hws_delete(struct rte_eth_dev *dev,
@@ -38 +40 @@
-@@ -1123,6 +1125,7 @@ mlx5_flow_meter_policy_validate(struct rte_eth_dev *dev,
+@@ -839,6 +841,7 @@ mlx5_flow_meter_policy_validate(struct rte_eth_dev *dev,
@@ -46 +48 @@
-@@ -1159,6 +1162,7 @@ mlx5_flow_meter_policy_hws_validate(struct rte_eth_dev *dev,
+@@ -875,6 +878,7 @@ mlx5_flow_meter_policy_hws_validate(struct rte_eth_dev *dev,
@@ -54 +56 @@
-@@ -1485,6 +1489,7 @@ mlx5_flow_meter_policy_get(struct rte_eth_dev *dev,
+@@ -1201,6 +1205,7 @@ mlx5_flow_meter_policy_get(struct rte_eth_dev *dev,
@@ -62 +64 @@
-@@ -1807,7 +1812,7 @@ policy_add_err:
+@@ -1523,7 +1528,7 @@ policy_add_err:
@@ -71 +73 @@
-@@ -2177,6 +2182,7 @@ error:
+@@ -1893,6 +1898,7 @@ error:
@@ -79 +81 @@
-@@ -2260,6 +2266,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1976,6 +1982,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
@@ -87 +89 @@
-@@ -2744,6 +2751,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
+@@ -2460,6 +2467,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
@@ -95 +97 @@
-@@ -2762,6 +2770,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
+@@ -2478,6 +2486,7 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
@@ -103 +105 @@
-@@ -2777,12 +2786,16 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
+@@ -2493,12 +2502,16 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
@@ -120 +122 @@
-@@ -3161,7 +3174,6 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
+@@ -2877,7 +2890,6 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
@@ -128 +130 @@
-@@ -3229,15 +3241,20 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
+@@ -2945,15 +2957,20 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
@@ -150 +152 @@
-@@ -3251,14 +3268,17 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
+@@ -2967,14 +2984,17 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
^ permalink raw reply [flat|nested] 263+ messages in thread
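The mlx5 fix above fences the HWS-only ops table and its runtime selection behind `HAVE_MLX5_HWS_SUPPORT`, so platforms without HWS neither compile nor reference that code. A toy version of the pattern (names are illustrative, not mlx5's; the sketch assumes the feature macro is simply undefined on unsupported platforms):

```c
/* Compile-time guard pattern: the HWS ops table exists only when the
 * feature macro is defined; otherwise the legacy table is returned
 * unconditionally and no HWS symbol is ever referenced. */
struct mtr_ops { int (*create)(void); };

static int legacy_create(void) { return 1; }
static const struct mtr_ops legacy_ops = { legacy_create };

#ifdef HAVE_HWS_SUPPORT
static int hws_create(void) { return 2; }
static const struct mtr_ops hws_ops = { hws_create };
#endif

static const struct mtr_ops *
ops_get(int dv_flow_en)
{
#ifdef HAVE_HWS_SUPPORT
    /* mirrors mlx5_flow_meter_ops_get(): dv_flow_en == 2 selects HWS */
    return dv_flow_en == 2 ? &hws_ops : &legacy_ops;
#else
    (void)dv_flow_en;
    return &legacy_ops;   /* HWS code is compiled out entirely */
#endif
}
```

Guarding both the table definition and every selection site is what the patch does throughout `mlx5_flow_meter.c`; guarding only one side would leave an unresolved reference on non-HWS builds.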
* patch 'net/mlx5: fix sync meter processing in HWS' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (90 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix HWS meter actions availability' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix indirect action async job initialization' " Xueming Li
` (31 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7192f0ed8230f93e80915d0e37198d710952be9f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7192f0ed8230f93e80915d0e37198d710952be9f Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 7 Mar 2024 12:19:09 +0200
Subject: [PATCH] net/mlx5: fix sync meter processing in HWS
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4359d9d1f76b9ac35e548fe958b59ae4e1cbf11b ]
Synchronous calls for meter ASO try to pull pending completions
from CQ, submit WR and return to caller. That avoids delays between
WR post and HW response.
If the template API is activated, the PMD uses the control queue for
sync operations.
The PMD has different formats for the `user_data` context in sync and
async meter ASO calls.
The PMD port destruction procedure submits async operations to the port
control queue and polls the queue CQs to drain HW responses.
Port destruction can therefore pull a meter ASO completion from the
control CQ; such a completion has the sync format but is processed by
the async handler.
The patch implements the sync meter ASO interface with async calls
in the template API environment.
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 35 +++++-
drivers/net/mlx5/mlx5_flow_aso.c | 178 ++++++++++++++++++-----------
drivers/net/mlx5/mlx5_flow_hw.c | 98 ++++++++--------
drivers/net/mlx5/mlx5_flow_meter.c | 27 +++--
4 files changed, 215 insertions(+), 123 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 153374802a..e2c6fe0d00 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1997,6 +1997,30 @@ enum dr_dump_rec_type {
DR_DUMP_REC_TYPE_PMD_COUNTER = 4430,
};
+#if defined(HAVE_MLX5_HWS_SUPPORT)
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+ MLX5_ASSERT(priv->hw_q[queue].job_idx <= priv->hw_q[queue].size);
+ return priv->hw_q[queue].job_idx ?
+ priv->hw_q[queue].job[--priv->hw_q[queue].job_idx] : NULL;
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, struct mlx5_hw_q_job *job, uint32_t queue)
+{
+ MLX5_ASSERT(priv->hw_q[queue].job_idx < priv->hw_q[queue].size);
+ priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
+}
+
+struct mlx5_hw_q_job *
+mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error);
+#endif
+
/**
* Indicates whether HW objects operations can be created by DevX.
*
@@ -2403,11 +2427,12 @@ int mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh);
int mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh);
void mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh,
enum mlx5_access_aso_opc_mod aso_opc_mod);
-int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
- struct mlx5_aso_mtr *mtr, struct mlx5_mtr_bulk *bulk,
- void *user_data, bool push);
-int mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
- struct mlx5_aso_mtr *mtr);
+int mlx5_aso_meter_update_by_wqe(struct mlx5_priv *priv, uint32_t queue,
+ struct mlx5_aso_mtr *mtr,
+ struct mlx5_mtr_bulk *bulk,
+ struct mlx5_hw_q_job *job, bool push);
+int mlx5_aso_mtr_wait(struct mlx5_priv *priv,
+ struct mlx5_aso_mtr *mtr, bool is_tmpl_api);
int mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
struct mlx5_aso_ct_action *ct,
const struct rte_flow_action_conntrack *profile,
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index f311443472..ab9eb21e01 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -792,7 +792,7 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh,
struct mlx5_aso_mtr *aso_mtr,
struct mlx5_mtr_bulk *bulk,
bool need_lock,
- void *user_data,
+ struct mlx5_hw_q_job *job,
bool push)
{
volatile struct mlx5_aso_wqe *wqe = NULL;
@@ -819,7 +819,7 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh,
rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
/* Fill next WQE. */
fm = &aso_mtr->fm;
- sq->elts[sq->head & mask].mtr = user_data ? user_data : aso_mtr;
+ sq->elts[sq->head & mask].user_data = job ? job : (void *)aso_mtr;
if (aso_mtr->type == ASO_METER_INDIRECT) {
if (likely(sh->config.dv_flow_en == 2))
pool = aso_mtr->pool;
@@ -897,24 +897,6 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh,
return 1;
}
-static void
-mlx5_aso_mtrs_status_update(struct mlx5_aso_sq *sq, uint16_t aso_mtrs_nums)
-{
- uint16_t size = 1 << sq->log_desc_n;
- uint16_t mask = size - 1;
- uint16_t i;
- struct mlx5_aso_mtr *aso_mtr = NULL;
- uint8_t exp_state = ASO_METER_WAIT;
-
- for (i = 0; i < aso_mtrs_nums; ++i) {
- aso_mtr = sq->elts[(sq->tail + i) & mask].mtr;
- MLX5_ASSERT(aso_mtr);
- (void)__atomic_compare_exchange_n(&aso_mtr->state,
- &exp_state, ASO_METER_READY,
- false, __ATOMIC_RELAXED, __ATOMIC_RELAXED);
- }
-}
-
static void
mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock)
{
@@ -925,7 +907,7 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock)
uint32_t idx;
uint32_t next_idx = cq->cq_ci & mask;
uint16_t max;
- uint16_t n = 0;
+ uint16_t i, n = 0;
int ret;
if (need_lock)
@@ -957,7 +939,19 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock)
cq->cq_ci++;
} while (1);
if (likely(n)) {
- mlx5_aso_mtrs_status_update(sq, n);
+ uint8_t exp_state = ASO_METER_WAIT;
+ struct mlx5_aso_mtr *aso_mtr;
+ __rte_unused bool verdict;
+
+ for (i = 0; i < n; ++i) {
+ aso_mtr = sq->elts[(sq->tail + i) & mask].mtr;
+ MLX5_ASSERT(aso_mtr);
+ verdict = __atomic_compare_exchange_n(&aso_mtr->state,
+ &exp_state, ASO_METER_READY,
+ false, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED);
+ MLX5_ASSERT(verdict);
+ }
sq->tail += n;
rte_io_wmb();
cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
@@ -966,6 +960,82 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock)
rte_spinlock_unlock(&sq->sqsl);
}
+static __rte_always_inline struct mlx5_aso_sq *
+mlx5_aso_mtr_select_sq(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
+ struct mlx5_aso_mtr *mtr, bool *need_lock)
+{
+ struct mlx5_aso_sq *sq;
+
+ if (likely(sh->config.dv_flow_en == 2) &&
+ mtr->type == ASO_METER_INDIRECT) {
+ if (queue == MLX5_HW_INV_QUEUE) {
+ sq = &mtr->pool->sq[mtr->pool->nb_sq - 1];
+ *need_lock = true;
+ } else {
+ sq = &mtr->pool->sq[queue];
+ *need_lock = false;
+ }
+ } else {
+ sq = &sh->mtrmng->pools_mng.sq;
+ *need_lock = true;
+ }
+ return sq;
+}
+
+#if defined(HAVE_MLX5_HWS_SUPPORT)
+static void
+mlx5_aso_poll_cq_mtr_hws(struct mlx5_priv *priv, struct mlx5_aso_sq *sq)
+{
+#define MLX5_HWS_MTR_CMPL_NUM 4
+
+ int i, ret;
+ struct mlx5_aso_mtr *mtr;
+ uint8_t exp_state = ASO_METER_WAIT;
+ struct rte_flow_op_result res[MLX5_HWS_MTR_CMPL_NUM];
+ __rte_unused bool verdict;
+
+ rte_spinlock_lock(&sq->sqsl);
+repeat:
+ ret = mlx5_aso_pull_completion(sq, res, MLX5_HWS_MTR_CMPL_NUM);
+ if (ret) {
+ for (i = 0; i < ret; i++) {
+ struct mlx5_hw_q_job *job = res[i].user_data;
+
+ MLX5_ASSERT(job);
+ mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool,
+ MLX5_INDIRECT_ACTION_IDX_GET(job->action));
+ MLX5_ASSERT(mtr);
+ verdict = __atomic_compare_exchange_n(&mtr->state,
+ &exp_state, ASO_METER_READY,
+ false, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED);
+ MLX5_ASSERT(verdict);
+ flow_hw_job_put(priv, job, CTRL_QUEUE_ID(priv));
+ }
+ if (ret == MLX5_HWS_MTR_CMPL_NUM)
+ goto repeat;
+ }
+ rte_spinlock_unlock(&sq->sqsl);
+
+#undef MLX5_HWS_MTR_CMPL_NUM
+}
+#else
+static void
+mlx5_aso_poll_cq_mtr_hws(__rte_unused struct mlx5_priv *priv, __rte_unused struct mlx5_aso_sq *sq)
+{
+ MLX5_ASSERT(false);
+}
+#endif
+
+static void
+mlx5_aso_poll_cq_mtr_sws(__rte_unused struct mlx5_priv *priv,
+ struct mlx5_aso_sq *sq)
+{
+ mlx5_aso_mtr_completion_handle(sq, true);
+}
+
+typedef void (*poll_cq_t)(struct mlx5_priv *, struct mlx5_aso_sq *);
+
/**
* Update meter parameter by send WQE.
*
@@ -980,39 +1050,29 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock)
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
- struct mlx5_aso_mtr *mtr,
- struct mlx5_mtr_bulk *bulk,
- void *user_data,
- bool push)
+mlx5_aso_meter_update_by_wqe(struct mlx5_priv *priv, uint32_t queue,
+ struct mlx5_aso_mtr *mtr,
+ struct mlx5_mtr_bulk *bulk,
+ struct mlx5_hw_q_job *job, bool push)
{
- struct mlx5_aso_sq *sq;
- uint32_t poll_wqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES;
bool need_lock;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_aso_sq *sq =
+ mlx5_aso_mtr_select_sq(sh, queue, mtr, &need_lock);
+ uint32_t poll_wqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES;
+ poll_cq_t poll_mtr_cq =
+ job ? mlx5_aso_poll_cq_mtr_hws : mlx5_aso_poll_cq_mtr_sws;
int ret;
- if (likely(sh->config.dv_flow_en == 2) &&
- mtr->type == ASO_METER_INDIRECT) {
- if (queue == MLX5_HW_INV_QUEUE) {
- sq = &mtr->pool->sq[mtr->pool->nb_sq - 1];
- need_lock = true;
- } else {
- sq = &mtr->pool->sq[queue];
- need_lock = false;
- }
- } else {
- sq = &sh->mtrmng->pools_mng.sq;
- need_lock = true;
- }
if (queue != MLX5_HW_INV_QUEUE) {
ret = mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr, bulk,
- need_lock, user_data, push);
+ need_lock, job, push);
return ret > 0 ? 0 : -1;
}
do {
- mlx5_aso_mtr_completion_handle(sq, need_lock);
+ poll_mtr_cq(priv, sq);
if (mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr, bulk,
- need_lock, NULL, true))
+ need_lock, job, true))
return 0;
/* Waiting for wqe resource. */
rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY);
@@ -1036,32 +1096,22 @@ mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, uint32_t queue,
- struct mlx5_aso_mtr *mtr)
+mlx5_aso_mtr_wait(struct mlx5_priv *priv,
+ struct mlx5_aso_mtr *mtr, bool is_tmpl_api)
{
+ bool need_lock;
struct mlx5_aso_sq *sq;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES;
- uint8_t state;
- bool need_lock;
+ uint8_t state = __atomic_load_n(&mtr->state, __ATOMIC_RELAXED);
+ poll_cq_t poll_mtr_cq =
+ is_tmpl_api ? mlx5_aso_poll_cq_mtr_hws : mlx5_aso_poll_cq_mtr_sws;
- if (likely(sh->config.dv_flow_en == 2) &&
- mtr->type == ASO_METER_INDIRECT) {
- if (queue == MLX5_HW_INV_QUEUE) {
- sq = &mtr->pool->sq[mtr->pool->nb_sq - 1];
- need_lock = true;
- } else {
- sq = &mtr->pool->sq[queue];
- need_lock = false;
- }
- } else {
- sq = &sh->mtrmng->pools_mng.sq;
- need_lock = true;
- }
- state = __atomic_load_n(&mtr->state, __ATOMIC_RELAXED);
if (state == ASO_METER_READY || state == ASO_METER_WAIT_ASYNC)
return 0;
+ sq = mlx5_aso_mtr_select_sq(sh, MLX5_HW_INV_QUEUE, mtr, &need_lock);
do {
- mlx5_aso_mtr_completion_handle(sq, need_lock);
+ poll_mtr_cq(priv, sq);
if (__atomic_load_n(&mtr->state, __ATOMIC_RELAXED) ==
ASO_METER_READY)
return 0;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6aaf3aee2a..f43ffb1d4e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -104,6 +104,12 @@ struct mlx5_tbl_multi_pattern_ctx {
#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error);
static int
mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
struct rte_flow_template_table *tbl,
@@ -277,21 +283,6 @@ static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
.hdr.ether_type = 0,
};
-static __rte_always_inline struct mlx5_hw_q_job *
-flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
-{
- MLX5_ASSERT(priv->hw_q[queue].job_idx <= priv->hw_q[queue].size);
- return priv->hw_q[queue].job_idx ?
- priv->hw_q[queue].job[--priv->hw_q[queue].job_idx] : NULL;
-}
-
-static __rte_always_inline void
-flow_hw_job_put(struct mlx5_priv *priv, struct mlx5_hw_q_job *job, uint32_t queue)
-{
- MLX5_ASSERT(priv->hw_q[queue].job_idx < priv->hw_q[queue].size);
- priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
-}
-
static inline enum mlx5dr_matcher_insert_mode
flow_hw_matcher_insert_mode_get(enum rte_flow_table_insertion_type insert_type)
{
@@ -1458,7 +1449,7 @@ flow_hw_meter_compile(struct rte_eth_dev *dev,
acts->rule_acts[jump_pos].action = (!!group) ?
acts->jump->hws_action :
acts->jump->root_action;
- if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
+ if (mlx5_aso_mtr_wait(priv, aso_mtr, true))
return -ENOMEM;
return 0;
}
@@ -1535,7 +1526,7 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions)
static __rte_always_inline struct mlx5_aso_mtr *
flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
const struct rte_flow_action *action,
- void *user_data, bool push)
+ struct mlx5_hw_q_job *job, bool push)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
@@ -1543,6 +1534,8 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
struct mlx5_aso_mtr *aso_mtr;
struct mlx5_flow_meter_info *fm;
uint32_t mtr_id;
+ uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_METER_MARK <<
+ MLX5_INDIRECT_ACTION_TYPE_OFFSET;
if (meter_mark->profile == NULL)
return NULL;
@@ -1561,15 +1554,16 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
ASO_METER_WAIT : ASO_METER_WAIT_ASYNC;
aso_mtr->offset = mtr_id - 1;
aso_mtr->init_color = fm->color_aware ? RTE_COLORS : RTE_COLOR_GREEN;
+ job->action = (void *)(handle | mtr_id);
/* Update ASO flow meter by wqe. */
- if (mlx5_aso_meter_update_by_wqe(priv->sh, queue, aso_mtr,
- &priv->mtr_bulk, user_data, push)) {
+ if (mlx5_aso_meter_update_by_wqe(priv, queue, aso_mtr,
+ &priv->mtr_bulk, job, push)) {
mlx5_ipool_free(pool->idx_pool, mtr_id);
return NULL;
}
/* Wait for ASO object completion. */
if (queue == MLX5_HW_INV_QUEUE &&
- mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) {
+ mlx5_aso_mtr_wait(priv, aso_mtr, true)) {
mlx5_ipool_free(pool->idx_pool, mtr_id);
return NULL;
}
@@ -1587,10 +1581,17 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
struct mlx5_aso_mtr *aso_mtr;
+ struct mlx5_hw_q_job *job =
+ flow_hw_action_job_init(priv, queue, NULL, NULL, NULL,
+ MLX5_HW_Q_JOB_TYPE_CREATE, NULL);
- aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, NULL, true);
- if (!aso_mtr)
+ if (!job)
+ return -1;
+ aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, true);
+ if (!aso_mtr) {
+ flow_hw_job_put(priv, job, queue);
return -1;
+ }
/* Compile METER_MARK action */
acts[aso_mtr_pos].action = pool->action;
@@ -3099,7 +3100,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
jump->root_action;
job->flow->jump = jump;
job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
- if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
+ if (mlx5_aso_mtr_wait(priv, aso_mtr, true))
return -1;
break;
case RTE_FLOW_ACTION_TYPE_AGE:
@@ -3777,13 +3778,6 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
job->query.hw);
aso_ct->state = ASO_CONNTRACK_READY;
}
- } else {
- /*
- * rte_flow_op_result::user data can point to
- * struct mlx5_aso_mtr object as well
- */
- if (queue != CTRL_QUEUE_ID(priv))
- MLX5_ASSERT(false);
}
}
@@ -10067,7 +10061,8 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
{
struct mlx5_hw_q_job *job;
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+ if (queue == MLX5_HW_INV_QUEUE)
+ queue = CTRL_QUEUE_ID(priv);
job = flow_hw_job_get(priv, queue);
if (!job) {
rte_flow_error_set(error, ENOMEM,
@@ -10082,6 +10077,17 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
return job;
}
+struct mlx5_hw_q_job *
+mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error)
+{
+ return flow_hw_action_job_init(priv, queue, handle, user_data, query_data,
+ type, error);
+}
+
static __rte_always_inline void
flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
struct mlx5_hw_q_job *job,
@@ -10141,12 +10147,12 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
const struct rte_flow_action_age *age;
struct mlx5_aso_mtr *aso_mtr;
cnt_id_t cnt_id;
- uint32_t mtr_id;
uint32_t age_idx;
bool push = flow_hw_action_push(attr);
bool aso = false;
+ bool force_job = action->type == RTE_FLOW_ACTION_TYPE_METER_MARK;
- if (attr) {
+ if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, NULL, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
error);
@@ -10201,9 +10207,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, push);
if (!aso_mtr)
break;
- mtr_id = (MLX5_INDIRECT_ACTION_TYPE_METER_MARK <<
- MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (aso_mtr->fm.meter_id);
- handle = (struct rte_flow_action_handle *)(uintptr_t)mtr_id;
+ handle = (void *)(uintptr_t)job->action;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
handle = flow_dv_action_create(dev, conf, action, error);
@@ -10218,7 +10222,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
NULL, "action type not supported");
break;
}
- if (job) {
+ if (job && !force_job) {
job->action = handle;
job->indirect_type = MLX5_HW_INDIRECT_TYPE_LEGACY;
flow_hw_action_finalize(dev, queue, job, push, aso,
@@ -10251,15 +10255,17 @@ mlx5_flow_update_meter_mark(struct rte_eth_dev *dev, uint32_t queue,
fm->color_aware = meter_mark->color_mode;
if (upd_meter_mark->state_valid)
fm->is_enable = meter_mark->state;
+ aso_mtr->state = (queue == MLX5_HW_INV_QUEUE) ?
+ ASO_METER_WAIT : ASO_METER_WAIT_ASYNC;
/* Update ASO flow meter by wqe. */
- if (mlx5_aso_meter_update_by_wqe(priv->sh, queue,
+ if (mlx5_aso_meter_update_by_wqe(priv, queue,
aso_mtr, &priv->mtr_bulk, job, push))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "Unable to update ASO meter WQE");
/* Wait for ASO object completion. */
if (queue == MLX5_HW_INV_QUEUE &&
- mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
+ mlx5_aso_mtr_wait(priv, aso_mtr, true))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "Unable to wait for ASO meter CQE");
@@ -10305,8 +10311,9 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
int ret = 0;
bool push = flow_hw_action_push(attr);
bool aso = false;
+ bool force_job = type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
- if (attr) {
+ if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, handle, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
error);
@@ -10343,7 +10350,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job)
+ if (job && !force_job)
flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -10386,8 +10393,9 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
bool push = flow_hw_action_push(attr);
bool aso = false;
int ret = 0;
+ bool force_job = type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
- if (attr) {
+ if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, handle, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
error);
@@ -10423,7 +10431,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
fm = &aso_mtr->fm;
fm->is_enable = 0;
/* Update ASO flow meter by wqe. */
- if (mlx5_aso_meter_update_by_wqe(priv->sh, queue, aso_mtr,
+ if (mlx5_aso_meter_update_by_wqe(priv, queue, aso_mtr,
&priv->mtr_bulk, job, push)) {
ret = -EINVAL;
rte_flow_error_set(error, EINVAL,
@@ -10433,7 +10441,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
}
/* Wait for ASO object completion. */
if (queue == MLX5_HW_INV_QUEUE &&
- mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) {
+ mlx5_aso_mtr_wait(priv, aso_mtr, true)) {
ret = -EINVAL;
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10457,7 +10465,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job)
+ if (job && !force_job)
flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index beeb868c8c..249dc73691 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -1613,12 +1613,12 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
if (sh->meter_aso_en) {
fm->is_enable = !!is_enable;
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
- ret = mlx5_aso_meter_update_by_wqe(sh, MLX5_HW_INV_QUEUE,
+ ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE,
aso_mtr, &priv->mtr_bulk,
NULL, true);
if (ret)
return ret;
- ret = mlx5_aso_mtr_wait(sh, MLX5_HW_INV_QUEUE, aso_mtr);
+ ret = mlx5_aso_mtr_wait(priv, aso_mtr, false);
if (ret)
return ret;
} else {
@@ -1864,7 +1864,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
/* If ASO meter supported, update ASO flow meter by wqe. */
if (priv->sh->meter_aso_en) {
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
- ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE,
+ ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE,
aso_mtr, &priv->mtr_bulk, NULL, true);
if (ret)
goto error;
@@ -1926,6 +1926,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
struct mlx5_flow_meter_info *fm;
struct mlx5_flow_meter_policy *policy = NULL;
struct mlx5_aso_mtr *aso_mtr;
+ struct mlx5_hw_q_job *job;
int ret;
if (!priv->mtr_profile_arr ||
@@ -1971,12 +1972,20 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
fm->shared = !!shared;
fm->initialized = 1;
/* Update ASO flow meter by wqe. */
- ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr,
- &priv->mtr_bulk, NULL, true);
- if (ret)
+ job = mlx5_flow_action_job_init(priv, MLX5_HW_INV_QUEUE, NULL, NULL,
+ NULL, MLX5_HW_Q_JOB_TYPE_CREATE, NULL);
+ if (!job)
+ return -rte_mtr_error_set(error, ENOMEM,
+ RTE_MTR_ERROR_TYPE_MTR_ID,
+ NULL, "No job context.");
+ ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE, aso_mtr,
+ &priv->mtr_bulk, job, true);
+ if (ret) {
+ flow_hw_job_put(priv, job, MLX5_HW_INV_QUEUE);
return -rte_mtr_error_set(error, ENOTSUP,
- RTE_MTR_ERROR_TYPE_UNSPECIFIED,
- NULL, "Failed to create devx meter.");
+ RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to create devx meter.");
+ }
fm->active_state = params->meter_enable;
__atomic_fetch_add(&fm->profile->ref_cnt, 1, __ATOMIC_RELAXED);
__atomic_fetch_add(&policy->ref_cnt, 1, __ATOMIC_RELAXED);
@@ -2627,7 +2636,7 @@ mlx5_flow_meter_attach(struct mlx5_priv *priv,
struct mlx5_aso_mtr *aso_mtr;
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
- if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) {
+ if (mlx5_aso_mtr_wait(priv, aso_mtr, false)) {
return rte_flow_error_set(error, ENOENT,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL,
--
2.34.1
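The patch above replaces the open-coded completion handling with a `poll_cq_t` function pointer chosen once per call (`mlx5_aso_poll_cq_mtr_hws` for the template API path, `mlx5_aso_poll_cq_mtr_sws` otherwise), instead of branching inside the wait loop. A minimal sketch of that dispatch pattern, using illustrative stand-in names rather than the real mlx5 structures:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the driver's types (not the real mlx5 structs). */
struct priv { int dummy; };
struct sq   { int polled_by; };  /* records which handler ran, for the sketch */

typedef void (*poll_cq_t)(struct priv *, struct sq *);

static void poll_cq_mtr_hws(struct priv *p, struct sq *q) { (void)p; q->polled_by = 1; }
static void poll_cq_mtr_sws(struct priv *p, struct sq *q) { (void)p; q->polled_by = 2; }

/* Pick the completion handler once, up front; the driver then calls it
 * repeatedly in its retry loop without re-testing the mode each iteration. */
static void wait_ready(struct priv *p, struct sq *q, bool is_tmpl_api)
{
	poll_cq_t poll = is_tmpl_api ? poll_cq_mtr_hws : poll_cq_mtr_sws;
	poll(p, q);  /* in the driver this runs inside a bounded polling loop */
}
```

The same selection is done in `mlx5_aso_meter_update_by_wqe` (keyed on whether a `job` was passed) and in `mlx5_aso_mtr_wait` (keyed on the `is_tmpl_api` flag).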
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.850160106 +0800
+++ 0093-net-mlx5-fix-sync-meter-processing-in-HWS.patch 2024-04-13 20:43:05.057753853 +0800
@@ -1 +1 @@
-From 4359d9d1f76b9ac35e548fe958b59ae4e1cbf11b Mon Sep 17 00:00:00 2001
+From 7192f0ed8230f93e80915d0e37198d710952be9f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4359d9d1f76b9ac35e548fe958b59ae4e1cbf11b ]
@@ -24 +26,0 @@
-Cc: stable@dpdk.org
@@ -31 +33 @@
- drivers/net/mlx5/mlx5_flow_hw.c | 99 ++++++++--------
+ drivers/net/mlx5/mlx5_flow_hw.c | 98 ++++++++--------
@@ -33 +35 @@
- 4 files changed, 216 insertions(+), 123 deletions(-)
+ 4 files changed, 215 insertions(+), 123 deletions(-)
@@ -36 +38 @@
-index 2fb3bb65cc..6ff8f322e0 100644
+index 153374802a..e2c6fe0d00 100644
@@ -39 +41 @@
-@@ -2033,6 +2033,30 @@ enum dr_dump_rec_type {
+@@ -1997,6 +1997,30 @@ enum dr_dump_rec_type {
@@ -70 +72 @@
-@@ -2443,11 +2467,12 @@ int mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh);
+@@ -2403,11 +2427,12 @@ int mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh);
@@ -344 +346 @@
-index c1b09c9c03..8f004b5435 100644
+index 6aaf3aee2a..f43ffb1d4e 100644
@@ -347,3 +349,3 @@
-@@ -183,6 +183,12 @@ mlx5_flow_hw_aux_get_mtr_id(struct rte_flow_hw *flow, struct rte_flow_hw_aux *au
- return aux->orig.mtr_id;
- }
+@@ -104,6 +104,12 @@ struct mlx5_tbl_multi_pattern_ctx {
+
+ #define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
@@ -360,3 +362,3 @@
-@@ -384,21 +390,6 @@ flow_hw_q_dec_flow_ops(struct mlx5_priv *priv, uint32_t queue)
- q->ongoing_flow_ops--;
- }
+@@ -277,21 +283,6 @@ static const struct rte_flow_item_eth ctrl_rx_eth_bcast_spec = {
+ .hdr.ether_type = 0,
+ };
@@ -382 +384 @@
-@@ -1560,7 +1551,7 @@ flow_hw_meter_compile(struct rte_eth_dev *dev,
+@@ -1458,7 +1449,7 @@ flow_hw_meter_compile(struct rte_eth_dev *dev,
@@ -391 +393 @@
-@@ -1637,7 +1628,7 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions)
+@@ -1535,7 +1526,7 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions)
@@ -395,3 +397,2 @@
-- void *user_data, bool push,
-+ struct mlx5_hw_q_job *job, bool push,
- struct rte_flow_error *error)
+- void *user_data, bool push)
++ struct mlx5_hw_q_job *job, bool push)
@@ -400 +401,2 @@
-@@ -1646,6 +1637,8 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+@@ -1543,6 +1534,8 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
@@ -407,3 +409,3 @@
- if (priv->shared_host) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-@@ -1669,15 +1662,16 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ if (meter_mark->profile == NULL)
+ return NULL;
+@@ -1561,15 +1554,16 @@ flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, uint32_t queue,
@@ -429 +431 @@
-@@ -1696,10 +1690,18 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
+@@ -1587,10 +1581,17 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
@@ -437 +439 @@
-- aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, NULL, true, error);
+- aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, NULL, true);
@@ -441,2 +443 @@
-+ aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job,
-+ true, error);
++ aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, true);
@@ -450 +451 @@
-@@ -3275,7 +3277,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
+@@ -3099,7 +3100,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
@@ -452,2 +453,2 @@
- flow->jump = jump;
- flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_FATE_JUMP;
+ job->flow->jump = jump;
+ job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
@@ -459 +460 @@
-@@ -4009,13 +4011,6 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
+@@ -3777,13 +3778,6 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
@@ -473 +474 @@
-@@ -11007,7 +11002,8 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10067,7 +10061,8 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -483 +484 @@
-@@ -11022,6 +11018,17 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10082,6 +10077,17 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -501 +502 @@
-@@ -11081,12 +11088,12 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10141,12 +10147,12 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -516,2 +517,2 @@
-@@ -11141,9 +11148,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
- aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, push, error);
+@@ -10201,9 +10207,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+ aso_mtr = flow_hw_meter_mark_alloc(dev, queue, action, job, push);
@@ -527 +528 @@
-@@ -11158,7 +11163,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10218,7 +10222,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -536 +537 @@
-@@ -11191,15 +11196,17 @@ mlx5_flow_update_meter_mark(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10251,15 +10255,17 @@ mlx5_flow_update_meter_mark(struct rte_eth_dev *dev, uint32_t queue,
@@ -556 +557 @@
-@@ -11245,8 +11252,9 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10305,8 +10311,9 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
@@ -567 +568 @@
-@@ -11283,7 +11291,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10343,7 +10350,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
@@ -576 +577 @@
-@@ -11326,8 +11334,9 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10386,8 +10393,9 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -587 +588 @@
-@@ -11363,7 +11372,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10423,7 +10431,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -596 +597 @@
-@@ -11373,7 +11382,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10433,7 +10441,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -605 +606 @@
-@@ -11397,7 +11406,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10457,7 +10465,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -615 +616 @@
-index 57de95b4b9..4045c4c249 100644
+index beeb868c8c..249dc73691 100644
@@ -618 +619 @@
-@@ -1897,12 +1897,12 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
+@@ -1613,12 +1613,12 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
@@ -633 +634 @@
-@@ -2148,7 +2148,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1864,7 +1864,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
@@ -642 +643 @@
-@@ -2210,6 +2210,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1926,6 +1926,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
@@ -650 +651 @@
-@@ -2255,12 +2256,20 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1971,12 +1972,20 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
@@ -676 +677 @@
-@@ -2911,7 +2920,7 @@ mlx5_flow_meter_attach(struct mlx5_priv *priv,
+@@ -2627,7 +2636,7 @@ mlx5_flow_meter_attach(struct mlx5_priv *priv,
* patch 'net/mlx5: fix indirect action async job initialization' has been queued to stable release 23.11.1
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1994df02c988a2f1d70cfd192ecd2098edfc6713
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1994df02c988a2f1d70cfd192ecd2098edfc6713 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 7 Mar 2024 12:19:10 +0200
Subject: [PATCH] net/mlx5: fix indirect action async job initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 ]
MLX5 PMD driver supports 2 types of indirect actions:
legacy INDIRECT and INDIRECT_LIST.
The PMD has a different handler for each indirect action type.
Therefore, the PMD marks the async `job::indirect_type` with the relevant value.
The PMD set the type only during indirect action creation.
A legacy INDIRECT query could get a job object previously used by an
INDIRECT_LIST action. In that case, the job object was handled as
INDIRECT_LIST because `job::indirect_type` was not re-assigned.
The patch sets `job::indirect_type` during job initialization,
according to the operation type.
Fixes: 59155721936e ("net/mlx5: fix indirect flow completion processing")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f43ffb1d4e..6d0f1beeec 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -109,6 +109,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
const struct rte_flow_action_handle *handle,
void *user_data, void *query_data,
enum mlx5_hw_job_type type,
+ enum mlx5_hw_indirect_type indirect_type,
struct rte_flow_error *error);
static int
mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
@@ -1583,7 +1584,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
struct mlx5_aso_mtr *aso_mtr;
struct mlx5_hw_q_job *job =
flow_hw_action_job_init(priv, queue, NULL, NULL, NULL,
- MLX5_HW_Q_JOB_TYPE_CREATE, NULL);
+ MLX5_HW_Q_JOB_TYPE_CREATE,
+ MLX5_HW_INDIRECT_TYPE_LEGACY, NULL);
if (!job)
return -1;
@@ -10057,6 +10059,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
const struct rte_flow_action_handle *handle,
void *user_data, void *query_data,
enum mlx5_hw_job_type type,
+ enum mlx5_hw_indirect_type indirect_type,
struct rte_flow_error *error)
{
struct mlx5_hw_q_job *job;
@@ -10074,6 +10077,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
job->action = handle;
job->user_data = user_data;
job->query.user = query_data;
+ job->indirect_type = indirect_type;
return job;
}
@@ -10085,7 +10089,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
struct rte_flow_error *error)
{
return flow_hw_action_job_init(priv, queue, handle, user_data, query_data,
- type, error);
+ type, MLX5_HW_INDIRECT_TYPE_LEGACY, error);
}
static __rte_always_inline void
@@ -10155,7 +10159,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, NULL, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
- error);
+ MLX5_HW_INDIRECT_TYPE_LEGACY, error);
if (!job)
return NULL;
}
@@ -10224,7 +10228,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
}
if (job && !force_job) {
job->action = handle;
- job->indirect_type = MLX5_HW_INDIRECT_TYPE_LEGACY;
flow_hw_action_finalize(dev, queue, job, push, aso,
handle != NULL);
}
@@ -10316,7 +10319,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, handle, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
- error);
+ MLX5_HW_INDIRECT_TYPE_LEGACY, error);
if (!job)
return -rte_errno;
}
@@ -10398,7 +10401,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
if (attr || force_job) {
job = flow_hw_action_job_init(priv, queue, handle, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
- error);
+ MLX5_HW_INDIRECT_TYPE_LEGACY, error);
if (!job)
return -rte_errno;
}
@@ -10711,7 +10714,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
if (attr) {
job = flow_hw_action_job_init(priv, queue, handle, user_data,
data, MLX5_HW_Q_JOB_TYPE_QUERY,
- error);
+ MLX5_HW_INDIRECT_TYPE_LEGACY, error);
if (!job)
return -rte_errno;
}
@@ -10765,7 +10768,7 @@ flow_hw_async_action_handle_query_update
job = flow_hw_action_job_init(priv, queue, handle, user_data,
query,
MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
- error);
+ MLX5_HW_INDIRECT_TYPE_LEGACY, error);
if (!job)
return -rte_errno;
}
@@ -11445,7 +11448,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
if (attr) {
job = flow_hw_action_job_init(priv, queue, NULL, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
- error);
+ MLX5_HW_INDIRECT_TYPE_LIST, error);
if (!job)
return NULL;
}
@@ -11465,7 +11468,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
}
if (job) {
job->action = handle;
- job->indirect_type = MLX5_HW_INDIRECT_TYPE_LIST;
flow_hw_action_finalize(dev, queue, job, push, false,
handle != NULL);
}
@@ -11510,7 +11512,7 @@ flow_hw_async_action_list_handle_destroy
if (attr) {
job = flow_hw_action_job_init(priv, queue, NULL, user_data,
NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
- error);
+ MLX5_HW_INDIRECT_TYPE_LIST, error);
if (!job)
return rte_errno;
}
--
2.34.1
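The bug fixed above comes from job objects being pooled and reused across operations: the type tag was only written on the creation path, so a recycled job kept whatever `indirect_type` its previous user left behind. A small sketch of the fix, with illustrative stand-in names rather than the real mlx5 structures:

```c
/* Illustrative stand-ins, not the real mlx5 definitions. */
enum hw_job_type { JOB_TYPE_CREATE, JOB_TYPE_QUERY };
enum hw_indirect_type { INDIRECT_TYPE_LEGACY, INDIRECT_TYPE_LIST };

struct hw_q_job {
	enum hw_job_type type;
	enum hw_indirect_type indirect_type; /* jobs are pooled and reused */
};

/* The fix: assign indirect_type whenever the job is initialized, so a
 * reused job can never carry a stale type from a previous operation. */
static void job_init(struct hw_q_job *job, enum hw_job_type type,
		     enum hw_indirect_type indirect_type)
{
	job->type = type;
	job->indirect_type = indirect_type;
}
```

This mirrors how the patch moves the assignment from the create handlers into `flow_hw_action_job_init`, which every operation path goes through.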
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.886808758 +0800
+++ 0094-net-mlx5-fix-indirect-action-async-job-initializatio.patch 2024-04-13 20:43:05.057753853 +0800
@@ -1 +1 @@
-From 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 Mon Sep 17 00:00:00 2001
+From 1994df02c988a2f1d70cfd192ecd2098edfc6713 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1a8b80329748033eb3bb9ed7433e0aef1bbcd838 ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 8f004b5435..b9ba05f695 100644
+index f43ffb1d4e..6d0f1beeec 100644
@@ -32 +34 @@
-@@ -188,6 +188,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -109,6 +109,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -40 +42 @@
-@@ -1692,7 +1693,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
+@@ -1583,7 +1584,8 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
@@ -50 +52 @@
-@@ -10998,6 +11000,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10057,6 +10059,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -58 +60 @@
-@@ -11015,6 +11018,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10074,6 +10077,7 @@ flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -66 +68 @@
-@@ -11026,7 +11030,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+@@ -10085,7 +10089,7 @@ mlx5_flow_action_job_init(struct mlx5_priv *priv, uint32_t queue,
@@ -75 +77 @@
-@@ -11096,7 +11100,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10155,7 +10159,7 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -84 +86 @@
-@@ -11165,7 +11169,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10224,7 +10228,6 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -92 +94 @@
-@@ -11257,7 +11260,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10316,7 +10319,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
@@ -101 +103 @@
-@@ -11339,7 +11342,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10398,7 +10401,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -110 +112 @@
-@@ -11663,7 +11666,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10711,7 +10714,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
@@ -119 +121 @@
-@@ -11717,7 +11720,7 @@ flow_hw_async_action_handle_query_update
+@@ -10765,7 +10768,7 @@ flow_hw_async_action_handle_query_update
@@ -128 +130 @@
-@@ -12397,7 +12400,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -11445,7 +11448,7 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -137 +139 @@
-@@ -12417,7 +12420,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+@@ -11465,7 +11468,6 @@ flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
@@ -145 +147 @@
-@@ -12462,7 +12464,7 @@ flow_hw_async_action_list_handle_destroy
+@@ -11510,7 +11512,7 @@ flow_hw_async_action_list_handle_destroy
* patch 'net/mlx5: fix non-masked indirect list meter translation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (92 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix indirect action async job initialization' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'doc: update link to Windows DevX in mlx5 guide' " Xueming Li
` (29 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=af41defcf7a2252f8383ef3ca2643d6937fe7787
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From af41defcf7a2252f8383ef3ca2643d6937fe7787 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 29 Feb 2024 13:31:19 +0200
Subject: [PATCH] net/mlx5: fix non-masked indirect list meter translation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4c041e2f0595e4a4e176a6d1f1a00c30debbc1fe ]
The template table reuses the DR5 action handle for non-masked indirect
actions. A flow rule must explicitly translate the non-masked indirect
action and update the DR5 handle with the rule's indirect object.
The current PMD assumed the DR5 handle of a non-masked indirect action was
always NULL before the action translation.
The patch always translates the non-masked indirect list meter object.
Fixes: e26f50adbf38 ("net/mlx5: support indirect list meter mark action")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6d0f1beeec..7ed0c0ac9b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1728,15 +1728,9 @@ flow_hw_translate_indirect_meter(struct rte_eth_dev *dev,
const struct rte_flow_indirect_update_flow_meter_mark **flow_conf =
(typeof(flow_conf))action_conf->conf;
- /*
- * Masked indirect handle set dr5 action during template table
- * translation.
- */
- if (!dr_rule->action) {
- ret = flow_dr_set_meter(priv, dr_rule, action_conf);
- if (ret)
- return ret;
- }
+ ret = flow_dr_set_meter(priv, dr_rule, action_conf);
+ if (ret)
+ return ret;
if (!act_data->shared_meter.conf_masked) {
if (flow_conf && flow_conf[0] && flow_conf[0]->init_color < RTE_COLORS)
flow_dr_mtr_flow_color(dr_rule, flow_conf[0]->init_color);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.916616519 +0800
+++ 0095-net-mlx5-fix-non-masked-indirect-list-meter-translat.patch 2024-04-13 20:43:05.067753840 +0800
@@ -1 +1 @@
-From 4c041e2f0595e4a4e176a6d1f1a00c30debbc1fe Mon Sep 17 00:00:00 2001
+From af41defcf7a2252f8383ef3ca2643d6937fe7787 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4c041e2f0595e4a4e176a6d1f1a00c30debbc1fe ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index b9ba05f695..a4e204695e 100644
+index 6d0f1beeec..7ed0c0ac9b 100644
@@ -29 +31 @@
-@@ -1838,15 +1838,9 @@ flow_hw_translate_indirect_meter(struct rte_eth_dev *dev,
+@@ -1728,15 +1728,9 @@ flow_hw_translate_indirect_meter(struct rte_eth_dev *dev,
* patch 'doc: update link to Windows DevX in mlx5 guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (93 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix non-masked indirect list meter translation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix VLAN ID in flow modify' " Xueming Li
` (28 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Ali Alnubani; +Cc: Tal Shnaiderman, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f01fd28181f5b1e6827ff8a5837983ff6817e01c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f01fd28181f5b1e6827ff8a5837983ff6817e01c Mon Sep 17 00:00:00 2001
From: Ali Alnubani <alialnu@nvidia.com>
Date: Thu, 29 Feb 2024 18:45:26 +0200
Subject: [PATCH] doc: update link to Windows DevX in mlx5 guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 ]
The older link no longer works.
Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
---
doc/guides/platform/mlx5.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index 400000e284..d64599699e 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -230,7 +230,7 @@ DevX SDK Installation
The DevX SDK must be installed on the machine building the Windows PMD.
Additional information can be found at
`How to Integrate Windows DevX in Your Development Environment
-<https://docs.nvidia.com/networking/display/winof2v260/RShim+Drivers+and+Usage#RShimDriversandUsage-DevXInterface>`_.
+<https://docs.nvidia.com/networking/display/winof2v290/devx+interface>`_.
The minimal supported WinOF2 version is 2.60.
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.948421978 +0800
+++ 0096-doc-update-link-to-Windows-DevX-in-mlx5-guide.patch 2024-04-13 20:43:05.067753840 +0800
@@ -1 +1 @@
-From 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 Mon Sep 17 00:00:00 2001
+From f01fd28181f5b1e6827ff8a5837983ff6817e01c Mon Sep 17 00:00:00 2001
@@ -4,0 +5 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
@@ -6 +7 @@
-The older link no longer works.
+[ upstream commit 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 ]
@@ -8 +9 @@
-Cc: stable@dpdk.org
+The older link no longer works.
@@ -17 +18 @@
-index a66cf778d1..e9a1f52aca 100644
+index 400000e284..d64599699e 100644
* patch 'net/mlx5: fix VLAN ID in flow modify' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (94 preceding siblings ...)
2024-04-13 12:49 ` patch 'doc: update link to Windows DevX in mlx5 guide' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix meter policy priority' " Xueming Li
` (27 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=78d38b5d67cfa6270eccc58c38611336bb0d6cff
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 78d38b5d67cfa6270eccc58c38611336bb0d6cff Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Fri, 1 Mar 2024 08:04:48 +0200
Subject: [PATCH] net/mlx5: fix VLAN ID in flow modify
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d ]
The PMD uses MODIFY_FIELD to implement the standalone OF_SET_VLAN_VID flow
action.
It assigned the immediate VLAN ID value to the `.level` member of the
`rte_flow_action_modify_data` structure instead of `.value`.
That assignment happened to work because both members had the same offset
in the hosting structure.
The patch assigns the VLAN ID directly to `.value`.
Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7ed0c0ac9b..aff1ab74e3 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6224,7 +6224,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
rm[set_vlan_vid_ix].conf)->vlan_vid != 0);
const struct rte_flow_action_of_set_vlan_vid *conf =
ra[set_vlan_vid_ix].conf;
- rte_be16_t vid = masked ? conf->vlan_vid : 0;
int width = mlx5_flow_item_field_width(dev, RTE_FLOW_FIELD_VLAN_ID, 0,
NULL, &error);
*spec = (typeof(*spec)) {
@@ -6235,8 +6234,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
- .level = vid,
- .offset = 0,
},
.width = width,
};
@@ -6248,11 +6245,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
- .level = masked ? (1U << width) - 1 : 0,
- .offset = 0,
},
.width = 0xffffffff,
};
+ if (masked) {
+ uint32_t mask_val = 0xffffffff;
+
+ rte_memcpy(spec->src.value, &conf->vlan_vid, sizeof(conf->vlan_vid));
+ rte_memcpy(mask->src.value, &mask_val, sizeof(mask_val));
+ }
ra[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
ra[set_vlan_vid_ix].conf = spec;
rm[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
@@ -6279,8 +6280,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
},
.src = {
.field = RTE_FLOW_FIELD_VALUE,
- .level = vid,
- .offset = 0,
},
.width = width,
};
@@ -6289,6 +6288,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
.conf = &conf
};
+ rte_memcpy(conf.src.value, &vid, sizeof(vid));
return flow_hw_modify_field_construct(job, act_data, hw_acts,
&modify_action);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:07.975640342 +0800
+++ 0097-net-mlx5-fix-VLAN-ID-in-flow-modify.patch 2024-04-13 20:43:05.067753840 +0800
@@ -1 +1 @@
-From b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d Mon Sep 17 00:00:00 2001
+From 78d38b5d67cfa6270eccc58c38611336bb0d6cff Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index a4e204695e..658f5daf82 100644
+index 7ed0c0ac9b..aff1ab74e3 100644
@@ -28 +30 @@
-@@ -6858,7 +6858,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -6224,7 +6224,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -36 +38 @@
-@@ -6869,8 +6868,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -6235,8 +6234,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -45 +47 @@
-@@ -6882,11 +6879,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -6248,11 +6245,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -63 +65 @@
-@@ -6913,8 +6914,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
+@@ -6279,8 +6280,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
@@ -72 +74 @@
-@@ -6923,6 +6922,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
+@@ -6289,6 +6288,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
@@ -77 +79,2 @@
- return flow_hw_modify_field_construct(mhdr_cmd, act_data, hw_acts, &modify_action);
+ return flow_hw_modify_field_construct(job, act_data, hw_acts,
+ &modify_action);
@@ -79 +81,0 @@
-
* patch 'net/mlx5: fix meter policy priority' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (95 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix VLAN ID in flow modify' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: remove duplication of L3 flow item validation' " Xueming Li
` (26 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shun Hao; +Cc: Bing Zhao, Matan Azrad, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a8b06881d95287ac683eaaa536d368bcbc2ac397
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a8b06881d95287ac683eaaa536d368bcbc2ac397 Mon Sep 17 00:00:00 2001
From: Shun Hao <shunh@nvidia.com>
Date: Fri, 1 Mar 2024 10:46:05 +0200
Subject: [PATCH] net/mlx5: fix meter policy priority
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 ]
Currently a meter policy's flows are always using the same priority for
all colors, so the red color flow might be before green/yellow ones.
This impacts performance because green/yellow packets check the red flow
first and miss, then match the green/yellow flows, introducing more hops.
This patch fixes this by giving the same priority to flows for all
colors.
Fixes: 363db9b00f ("net/mlx5: handle yellow case in default meter policy")
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 41 +++++++++++++++++++--------------
1 file changed, 24 insertions(+), 17 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c0d9e4fb82..1c85331cb6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -17655,9 +17655,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
}
}
tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl);
- if (priority < RTE_COLOR_RED)
- flow_dv_match_meta_reg(matcher.mask.buf,
- (enum modify_reg)color_reg_c_idx, color_mask, color_mask);
+ flow_dv_match_meta_reg(matcher.mask.buf,
+ (enum modify_reg)color_reg_c_idx, color_mask, color_mask);
matcher.priority = priority;
matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
matcher.mask.size);
@@ -17708,7 +17707,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
int i;
int ret = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, &flow_err);
struct mlx5_sub_policy_color_rule *color_rule;
- bool svport_match;
struct mlx5_sub_policy_color_rule *tmp_rules[RTE_COLORS] = {NULL};
if (ret < 0)
@@ -17744,10 +17742,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
/* No use. */
attr.priority = i;
/* Create matchers for colors. */
- svport_match = (i != RTE_COLOR_RED) ? match_src_port : false;
if (__flow_dv_create_policy_matcher(dev, color_reg_c_idx,
MLX5_MTR_POLICY_MATCHER_PRIO, sub_policy,
- &attr, svport_match, NULL,
+ &attr, match_src_port, NULL,
&color_rule->matcher, &flow_err)) {
DRV_LOG(ERR, "Failed to create color%u matcher.", i);
goto err_exit;
@@ -17757,7 +17754,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
color_reg_c_idx, (enum rte_color)i,
color_rule->matcher,
acts[i].actions_n, acts[i].dv_actions,
- svport_match, NULL, &color_rule->rule,
+ match_src_port, NULL, &color_rule->rule,
&attr)) {
DRV_LOG(ERR, "Failed to create color%u rule.", i);
goto err_exit;
@@ -18640,7 +18637,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
struct {
struct mlx5_flow_meter_policy *fm_policy;
struct mlx5_flow_meter_info *next_fm;
- struct mlx5_sub_policy_color_rule *tag_rule[MLX5_MTR_RTE_COLORS];
+ struct mlx5_sub_policy_color_rule *tag_rule[RTE_COLORS];
} fm_info[MLX5_MTR_CHAIN_MAX_NUM] = { {0} };
uint32_t fm_cnt = 0;
uint32_t i, j;
@@ -18674,14 +18671,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
mtr_policy = fm_info[i].fm_policy;
rte_spinlock_lock(&mtr_policy->sl);
sub_policy = mtr_policy->sub_policys[domain][0];
- for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
+ for (j = 0; j < RTE_COLORS; j++) {
uint8_t act_n = 0;
- struct mlx5_flow_dv_modify_hdr_resource *modify_hdr;
+ struct mlx5_flow_dv_modify_hdr_resource *modify_hdr = NULL;
struct mlx5_flow_dv_port_id_action_resource *port_action;
+ uint8_t fate_action;
- if (mtr_policy->act_cnt[j].fate_action != MLX5_FLOW_FATE_MTR &&
- mtr_policy->act_cnt[j].fate_action != MLX5_FLOW_FATE_PORT_ID)
- continue;
+ if (j == RTE_COLOR_RED) {
+ fate_action = MLX5_FLOW_FATE_DROP;
+ } else {
+ fate_action = mtr_policy->act_cnt[j].fate_action;
+ modify_hdr = mtr_policy->act_cnt[j].modify_hdr;
+ if (fate_action != MLX5_FLOW_FATE_MTR &&
+ fate_action != MLX5_FLOW_FATE_PORT_ID &&
+ fate_action != MLX5_FLOW_FATE_DROP)
+ continue;
+ }
color_rule = mlx5_malloc(MLX5_MEM_ZERO,
sizeof(struct mlx5_sub_policy_color_rule),
0, SOCKET_ID_ANY);
@@ -18693,9 +18698,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
goto err_exit;
}
color_rule->src_port = src_port;
- modify_hdr = mtr_policy->act_cnt[j].modify_hdr;
/* Prepare to create color rule. */
- if (mtr_policy->act_cnt[j].fate_action == MLX5_FLOW_FATE_MTR) {
+ if (fate_action == MLX5_FLOW_FATE_MTR) {
next_fm = fm_info[i].next_fm;
if (mlx5_flow_meter_attach(priv, next_fm, &attr, error)) {
mlx5_free(color_rule);
@@ -18722,7 +18726,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
}
acts.dv_actions[act_n++] = tbl_data->jump.action;
acts.actions_n = act_n;
- } else {
+ } else if (fate_action == MLX5_FLOW_FATE_PORT_ID) {
port_action =
mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID],
mtr_policy->act_cnt[j].rix_port_id_action);
@@ -18735,6 +18739,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
acts.dv_actions[act_n++] = modify_hdr->action;
acts.dv_actions[act_n++] = port_action->action;
acts.actions_n = act_n;
+ } else {
+ acts.dv_actions[act_n++] = mtr_policy->dr_drop_action[domain];
+ acts.actions_n = act_n;
}
fm_info[i].tag_rule[j] = color_rule;
TAILQ_INSERT_TAIL(&sub_policy->color_rules[j], color_rule, next_port);
@@ -18766,7 +18773,7 @@ err_exit:
mtr_policy = fm_info[i].fm_policy;
rte_spinlock_lock(&mtr_policy->sl);
sub_policy = mtr_policy->sub_policys[domain][0];
- for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
+ for (j = 0; j < RTE_COLORS; j++) {
color_rule = fm_info[i].tag_rule[j];
if (!color_rule)
continue;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.008132500 +0800
+++ 0098-net-mlx5-fix-meter-policy-priority.patch 2024-04-13 20:43:05.077753827 +0800
@@ -1 +1 @@
-From 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 Mon Sep 17 00:00:00 2001
+From a8b06881d95287ac683eaaa536d368bcbc2ac397 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 18f09b22be..f1584ed6e0 100644
+index c0d9e4fb82..1c85331cb6 100644
@@ -29 +31 @@
-@@ -17922,9 +17922,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
+@@ -17655,9 +17655,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
@@ -41 +43 @@
-@@ -17975,7 +17974,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17708,7 +17707,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -49 +51 @@
-@@ -18011,10 +18009,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17744,10 +17742,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -61 +63 @@
-@@ -18024,7 +18021,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17757,7 +17754,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -70 +72 @@
-@@ -18907,7 +18904,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18640,7 +18637,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -79 +81 @@
-@@ -18941,14 +18938,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18674,14 +18671,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -107 +109 @@
-@@ -18960,9 +18965,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18693,9 +18698,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -118 +120 @@
-@@ -18989,7 +18993,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18722,7 +18726,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -127 +129 @@
-@@ -19002,6 +19006,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18735,6 +18739,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -137 +139 @@
-@@ -19033,7 +19040,7 @@ err_exit:
+@@ -18766,7 +18773,7 @@ err_exit:
* patch 'net/mlx5: remove duplication of L3 flow item validation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (96 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix meter policy priority' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix IP-in-IP tunnels recognition' " Xueming Li
` (25 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c551015ebb6f2a9615e6e2583603794232665af5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c551015ebb6f2a9615e6e2583603794232665af5 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 29 Feb 2024 18:05:03 +0200
Subject: [PATCH] net/mlx5: remove duplication of L3 flow item validation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 ]
Remove code duplication in DV L3 item validation and translation.
Fixes: 3193c2494eea ("net/mlx5: fix L4 protocol validation")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 151 +++++++++-----------------------
1 file changed, 43 insertions(+), 108 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1c85331cb6..eaadbf577f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7248,6 +7248,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
return 0;
}
+static __rte_always_inline uint8_t
+mlx5_flow_l3_next_protocol(const struct rte_flow_item *l3_item,
+ enum MLX5_SET_MATCHER key_type)
+{
+#define MLX5_L3_NEXT_PROTOCOL(i, ms) \
+ ((i)->type == RTE_FLOW_ITEM_TYPE_IPV4 ? \
+ ((const struct rte_flow_item_ipv4 *)(i)->ms)->hdr.next_proto_id : \
+ (i)->type == RTE_FLOW_ITEM_TYPE_IPV6 ? \
+ ((const struct rte_flow_item_ipv6 *)(i)->ms)->hdr.proto : \
+ (i)->type == RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT ? \
+ ((const struct rte_flow_item_ipv6_frag_ext *)(i)->ms)->hdr.next_header :\
+ 0xff)
+
+ uint8_t next_protocol;
+
+ if (l3_item->mask != NULL && l3_item->spec != NULL) {
+ next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, mask);
+ if (next_protocol)
+ next_protocol &= MLX5_L3_NEXT_PROTOCOL(l3_item, spec);
+ else
+ next_protocol = 0xff;
+ } else if (key_type == MLX5_SET_MATCHER_HS_M && l3_item->mask != NULL) {
+ next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, mask);
+ } else if (key_type == MLX5_SET_MATCHER_HS_V && l3_item->spec != NULL) {
+ next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, spec);
+ } else {
+ /* Reset for inner layer. */
+ next_protocol = 0xff;
+ }
+ return next_protocol;
+
+#undef MLX5_L3_NEXT_PROTOCOL
+}
+
/**
* Validate IB BTH item.
*
@@ -7530,19 +7564,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
MLX5_FLOW_LAYER_OUTER_L3_IPV4;
- if (items->mask != NULL &&
- ((const struct rte_flow_item_ipv4 *)
- items->mask)->hdr.next_proto_id) {
- next_protocol =
- ((const struct rte_flow_item_ipv4 *)
- (items->spec))->hdr.next_proto_id;
- next_protocol &=
- ((const struct rte_flow_item_ipv4 *)
- (items->mask))->hdr.next_proto_id;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol
+ (items, (enum MLX5_SET_MATCHER)-1);
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
mlx5_flow_tunnel_ip_check(items, next_protocol,
@@ -7556,22 +7579,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
MLX5_FLOW_LAYER_OUTER_L3_IPV6;
- if (items->mask != NULL &&
- ((const struct rte_flow_item_ipv6 *)
- items->mask)->hdr.proto) {
- item_ipv6_proto =
- ((const struct rte_flow_item_ipv6 *)
- items->spec)->hdr.proto;
- next_protocol =
- ((const struct rte_flow_item_ipv6 *)
- items->spec)->hdr.proto;
- next_protocol &=
- ((const struct rte_flow_item_ipv6 *)
- items->mask)->hdr.proto;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol
+ (items, (enum MLX5_SET_MATCHER)-1);
break;
case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
ret = flow_dv_validate_item_ipv6_frag_ext(items,
@@ -7582,19 +7591,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
last_item = tunnel ?
MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
- if (items->mask != NULL &&
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->mask)->hdr.next_header) {
- next_protocol =
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->spec)->hdr.next_header;
- next_protocol &=
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->mask)->hdr.next_header;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol
+ (items, (enum MLX5_SET_MATCHER)-1);
break;
case RTE_FLOW_ITEM_TYPE_TCP:
ret = mlx5_flow_validate_item_tcp
@@ -13735,28 +13733,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
wks->priority = MLX5_PRIORITY_MAP_L3;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
MLX5_FLOW_LAYER_OUTER_L3_IPV4;
- if (items->mask != NULL &&
- items->spec != NULL &&
- ((const struct rte_flow_item_ipv4 *)
- items->mask)->hdr.next_proto_id) {
- next_protocol =
- ((const struct rte_flow_item_ipv4 *)
- (items->spec))->hdr.next_proto_id;
- next_protocol &=
- ((const struct rte_flow_item_ipv4 *)
- (items->mask))->hdr.next_proto_id;
- } else if (key_type == MLX5_SET_MATCHER_HS_M &&
- items->mask != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv4 *)
- (items->mask))->hdr.next_proto_id;
- } else if (key_type == MLX5_SET_MATCHER_HS_V &&
- items->spec != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv4 *)
- (items->spec))->hdr.next_proto_id;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
mlx5_flow_tunnel_ip_check(items, next_protocol,
@@ -13766,56 +13743,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
wks->priority = MLX5_PRIORITY_MAP_L3;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
MLX5_FLOW_LAYER_OUTER_L3_IPV6;
- if (items->mask != NULL &&
- items->spec != NULL &&
- ((const struct rte_flow_item_ipv6 *)
- items->mask)->hdr.proto) {
- next_protocol =
- ((const struct rte_flow_item_ipv6 *)
- items->spec)->hdr.proto;
- next_protocol &=
- ((const struct rte_flow_item_ipv6 *)
- items->mask)->hdr.proto;
- } else if (key_type == MLX5_SET_MATCHER_HS_M &&
- items->mask != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv6 *)
- (items->mask))->hdr.proto;
- } else if (key_type == MLX5_SET_MATCHER_HS_V &&
- items->spec != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv6 *)
- (items->spec))->hdr.proto;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
break;
case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
flow_dv_translate_item_ipv6_frag_ext
(key, items, tunnel, key_type);
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
- if (items->mask != NULL &&
- items->spec != NULL &&
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->mask)->hdr.next_header) {
- next_protocol =
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->spec)->hdr.next_header;
- next_protocol &=
- ((const struct rte_flow_item_ipv6_frag_ext *)
- items->mask)->hdr.next_header;
- } else if (key_type == MLX5_SET_MATCHER_HS_M &&
- items->mask != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
- (items->mask))->hdr.next_header;
- } else if (key_type == MLX5_SET_MATCHER_HS_V &&
- items->spec != NULL) {
- next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
- (items->spec))->hdr.next_header;
- } else {
- /* Reset for inner layer. */
- next_protocol = 0xff;
- }
+ next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
break;
case RTE_FLOW_ITEM_TYPE_TCP:
flow_dv_translate_item_tcp(key, items, tunnel, key_type);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.039898658 +0800
+++ 0099-net-mlx5-remove-duplication-of-L3-flow-item-validati.patch 2024-04-13 20:43:05.087753814 +0800
@@ -1 +1 @@
-From 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 Mon Sep 17 00:00:00 2001
+From c551015ebb6f2a9615e6e2583603794232665af5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index f1584ed6e0..9e444c8a1c 100644
+index 1c85331cb6..eaadbf577f 100644
@@ -21 +23 @@
-@@ -7488,6 +7488,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
+@@ -7248,6 +7248,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
@@ -62 +64 @@
-@@ -7770,19 +7804,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7530,19 +7564,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -84 +86 @@
-@@ -7796,22 +7819,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7556,22 +7579,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -109 +111 @@
-@@ -7822,19 +7831,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7582,19 +7591,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -131 +133 @@
-@@ -13997,28 +13995,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13735,28 +13733,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
@@ -161 +163 @@
-@@ -14028,56 +14005,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13766,56 +13743,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
* patch 'net/mlx5: fix IP-in-IP tunnels recognition' has been queued to stable release 23.11.1
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Ori Kam, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a9f0acac6ac246cbfe95734901fb5b4599bebfd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a9f0acac6ac246cbfe95734901fb5b4599bebfd Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 29 Feb 2024 18:05:04 +0200
Subject: [PATCH] net/mlx5: fix IP-in-IP tunnels recognition
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2db234e769e121446b7b6d8e97e00212bebf7a3c ]
The patch fixes IP-in-IP tunnel recognition for the following patterns:
/ [ipv4|ipv6] proto is [ipv4|ipv6] / end
/ [ipv4|ipv6] / [ipv4|ipv6] /
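The detection rule introduced by the patch (see the new `mlx5_flow_tunnel_ip_check` in the diff below) can be summarized as: an IPv4/IPv6 item opens an outer tunnel when the outer L3 header carries an IP-in-IP protocol number, and any L3 item matched after an outer L3 layer is an inner header. A simplified, self-contained C sketch of that classification (names are illustrative stand-ins, not the driver's real structures):

```c
#include <assert.h>
#include <netinet/in.h> /* IPPROTO_IPIP (4), IPPROTO_IPV6 (41) */
#include <stdint.h>

enum l3_tunnel_detection { L3_TUNNEL_NONE, L3_TUNNEL_OUTER, L3_TUNNEL_INNER };

/* outer_l3_seen: an outer IPv4/IPv6 item was already matched.
 * next_protocol: protocol/next-header value of the current L3 item. */
static enum l3_tunnel_detection
l3_tunnel_check(int outer_l3_seen, uint8_t next_protocol)
{
	if (!outer_l3_seen) {
		/* e.g. "ipv4 proto is 4 / end" starts an outer tunnel */
		if (next_protocol == IPPROTO_IPIP ||
		    next_protocol == IPPROTO_IPV6)
			return L3_TUNNEL_OUTER;
		return L3_TUNNEL_NONE;
	}
	/* e.g. "ipv4 / ipv4": second L3 item is the inner header */
	return L3_TUNNEL_INNER;
}
```

The actual driver additionally records which tunnel flag (`MLX5_FLOW_LAYER_IPIP` vs `MLX5_FLOW_LAYER_IPV6_ENCAP`) to set; the sketch only shows the outer/inner decision.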
Fixes: 3d69434113d1 ("net/mlx5: add Direct Verbs validation function")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 104 ++++++++++++++++++++++++--------
1 file changed, 80 insertions(+), 24 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index eaadbf577f..e36443436e 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -267,21 +267,41 @@ struct field_modify_info modify_tcp[] = {
{0, 0, 0},
};
-static void
+enum mlx5_l3_tunnel_detection {
+ l3_tunnel_none,
+ l3_tunnel_outer,
+ l3_tunnel_inner
+};
+
+static enum mlx5_l3_tunnel_detection
mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused,
- uint8_t next_protocol, uint64_t *item_flags,
- int *tunnel)
+ uint8_t next_protocol, uint64_t item_flags,
+ uint64_t *l3_tunnel_flag)
{
+ enum mlx5_l3_tunnel_detection td = l3_tunnel_none;
+
MLX5_ASSERT(item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
item->type == RTE_FLOW_ITEM_TYPE_IPV6);
- if (next_protocol == IPPROTO_IPIP) {
- *item_flags |= MLX5_FLOW_LAYER_IPIP;
- *tunnel = 1;
- }
- if (next_protocol == IPPROTO_IPV6) {
- *item_flags |= MLX5_FLOW_LAYER_IPV6_ENCAP;
- *tunnel = 1;
+ if ((item_flags & MLX5_FLOW_LAYER_OUTER_L3) == 0) {
+ switch (next_protocol) {
+ case IPPROTO_IPIP:
+ td = l3_tunnel_outer;
+ *l3_tunnel_flag = MLX5_FLOW_LAYER_IPIP;
+ break;
+ case IPPROTO_IPV6:
+ td = l3_tunnel_outer;
+ *l3_tunnel_flag = MLX5_FLOW_LAYER_IPV6_ENCAP;
+ break;
+ default:
+ break;
+ }
+ } else {
+ td = l3_tunnel_inner;
+ *l3_tunnel_flag = item->type == RTE_FLOW_ITEM_TYPE_IPV4 ?
+ MLX5_FLOW_LAYER_IPIP :
+ MLX5_FLOW_LAYER_IPV6_ENCAP;
}
+ return td;
}
static inline struct mlx5_hlist *
@@ -7478,6 +7498,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
is_root = (uint64_t)ret;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ enum mlx5_l3_tunnel_detection l3_tunnel_detection;
+ uint64_t l3_tunnel_flag;
int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
int type = items->type;
@@ -7555,8 +7577,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
vlan_m = items->mask;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- mlx5_flow_tunnel_ip_check(items, next_protocol,
- &item_flags, &tunnel);
+ next_protocol = mlx5_flow_l3_next_protocol
+ (items, (enum MLX5_SET_MATCHER)-1);
+ l3_tunnel_detection =
+ mlx5_flow_tunnel_ip_check(items, next_protocol,
+ item_flags,
+ &l3_tunnel_flag);
+ if (l3_tunnel_detection == l3_tunnel_inner) {
+ item_flags |= l3_tunnel_flag;
+ tunnel = 1;
+ }
ret = flow_dv_validate_item_ipv4(dev, items, item_flags,
last_item, ether_type,
error);
@@ -7564,12 +7594,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
MLX5_FLOW_LAYER_OUTER_L3_IPV4;
- next_protocol = mlx5_flow_l3_next_protocol
- (items, (enum MLX5_SET_MATCHER)-1);
+ if (l3_tunnel_detection == l3_tunnel_outer)
+ item_flags |= l3_tunnel_flag;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- mlx5_flow_tunnel_ip_check(items, next_protocol,
- &item_flags, &tunnel);
+ next_protocol = mlx5_flow_l3_next_protocol
+ (items, (enum MLX5_SET_MATCHER)-1);
+ l3_tunnel_detection =
+ mlx5_flow_tunnel_ip_check(items, next_protocol,
+ item_flags,
+ &l3_tunnel_flag);
+ if (l3_tunnel_detection == l3_tunnel_inner) {
+ item_flags |= l3_tunnel_flag;
+ tunnel = 1;
+ }
ret = mlx5_flow_validate_item_ipv6(items, item_flags,
last_item,
ether_type,
@@ -7579,8 +7617,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
MLX5_FLOW_LAYER_OUTER_L3_IPV6;
- next_protocol = mlx5_flow_l3_next_protocol
- (items, (enum MLX5_SET_MATCHER)-1);
+ if (l3_tunnel_detection == l3_tunnel_outer)
+ item_flags |= l3_tunnel_flag;
break;
case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
ret = flow_dv_validate_item_ipv6_frag_ext(items,
@@ -13683,6 +13721,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL);
int item_type = items->type;
uint64_t last_item = wks->last_item;
+ enum mlx5_l3_tunnel_detection l3_tunnel_detection;
+ uint64_t l3_tunnel_flag;
int ret;
switch (item_type) {
@@ -13726,24 +13766,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- mlx5_flow_tunnel_ip_check(items, next_protocol,
- &wks->item_flags, &tunnel);
+ next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+ l3_tunnel_detection =
+ mlx5_flow_tunnel_ip_check(items, next_protocol,
+ wks->item_flags,
+ &l3_tunnel_flag);
+ if (l3_tunnel_detection == l3_tunnel_inner) {
+ wks->item_flags |= l3_tunnel_flag;
+ tunnel = 1;
+ }
flow_dv_translate_item_ipv4(key, items, tunnel,
wks->group, key_type);
wks->priority = MLX5_PRIORITY_MAP_L3;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
MLX5_FLOW_LAYER_OUTER_L3_IPV4;
- next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+ if (l3_tunnel_detection == l3_tunnel_outer)
+ wks->item_flags |= l3_tunnel_flag;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- mlx5_flow_tunnel_ip_check(items, next_protocol,
- &wks->item_flags, &tunnel);
+ next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+ l3_tunnel_detection =
+ mlx5_flow_tunnel_ip_check(items, next_protocol,
+ wks->item_flags,
+ &l3_tunnel_flag);
+ if (l3_tunnel_detection == l3_tunnel_inner) {
+ wks->item_flags |= l3_tunnel_flag;
+ tunnel = 1;
+ }
flow_dv_translate_item_ipv6(key, items, tunnel,
wks->group, key_type);
wks->priority = MLX5_PRIORITY_MAP_L3;
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
MLX5_FLOW_LAYER_OUTER_L3_IPV6;
- next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+ if (l3_tunnel_detection == l3_tunnel_outer)
+ wks->item_flags |= l3_tunnel_flag;
break;
case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
flow_dv_translate_item_ipv6_frag_ext
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.074448313 +0800
+++ 0100-net-mlx5-fix-IP-in-IP-tunnels-recognition.patch 2024-04-13 20:43:05.087753814 +0800
@@ -1 +1 @@
-From 2db234e769e121446b7b6d8e97e00212bebf7a3c Mon Sep 17 00:00:00 2001
+From 9a9f0acac6ac246cbfe95734901fb5b4599bebfd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2db234e769e121446b7b6d8e97e00212bebf7a3c ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 9e444c8a1c..80239bebee 100644
+index eaadbf577f..e36443436e 100644
@@ -25 +27 @@
-@@ -275,21 +275,41 @@ struct field_modify_info modify_tcp[] = {
+@@ -267,21 +267,41 @@ struct field_modify_info modify_tcp[] = {
@@ -77 +79 @@
-@@ -7718,6 +7738,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7478,6 +7498,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -86 +88 @@
-@@ -7795,8 +7817,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7555,8 +7577,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -105 +107 @@
-@@ -7804,12 +7834,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7564,12 +7594,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -130 +132 @@
-@@ -7819,8 +7857,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7579,8 +7617,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -141 +143 @@
-@@ -13945,6 +13983,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13683,6 +13721,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
@@ -150 +152 @@
-@@ -13988,24 +14028,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13726,24 +13766,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
* patch 'net/mlx5: fix DR context release ordering' has been queued to stable release 23.11.1
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Maayan Kashani; +Cc: Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=859bafedf3ae9983e143ee35bcbb8046b91430bc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 859bafedf3ae9983e143ee35bcbb8046b91430bc Mon Sep 17 00:00:00 2001
From: Maayan Kashani <mkashani@nvidia.com>
Date: Wed, 6 Mar 2024 08:02:07 +0200
Subject: [PATCH] net/mlx5: fix DR context release ordering
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d068681b637da6b7857c13711eb1a675b2a341e3 ]
Creating rules on group >0, creates a jump action on the group table.
Non template code releases the group data under shared mlx5dr free code,
And the mlx5dr context was already closed in HWS code.
Remove mlx5dr context release from hws resource release function.
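The essence of the fix is an ordering invariant: the DR context may only be closed after every resource that references it (here, the shared group data released by `mlx5_os_free_shared_dr`) is gone. A generic C sketch of that teardown discipline, with purely hypothetical names:

```c
#include <assert.h>

/* Hypothetical model: "groups" reference a parent "context", so the
 * context must stay open until the last group is released. */
struct ctx { int open; };
struct group { struct ctx *parent; };

static int live_groups; /* count of groups still referencing the ctx */

static void group_release(struct group *g)
{
	/* Releasing a group touches the parent context: it must be open. */
	assert(g->parent->open);
	live_groups--;
}

static void ctx_close(struct ctx *c)
{
	/* Close last, only once all dependents are released. */
	assert(live_groups == 0);
	c->open = 0;
}
```

In the patch this corresponds to moving `mlx5dr_context_close()` out of `flow_hw_resource_release()` and into `mlx5_dev_close()`, after `mlx5_os_free_shared_dr()`.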
Fixes: b401400db24e ("net/mlx5: add port flow configuration")
Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 7 +++++++
drivers/net/mlx5/mlx5_flow_hw.c | 2 --
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index df0383078d..95f2ed073c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2268,6 +2268,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_indirect_list_handles_release(dev);
#ifdef HAVE_MLX5_HWS_SUPPORT
flow_hw_destroy_vport_action(dev);
+ /* dr context will be closed after mlx5_os_free_shared_dr. */
flow_hw_resource_release(dev);
flow_hw_clear_port_info(dev);
#endif
@@ -2299,6 +2300,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
mlx5_hlist_destroy(priv->mreg_cp_tbl);
mlx5_mprq_free_mp(dev);
mlx5_os_free_shared_dr(priv);
+#ifdef HAVE_MLX5_HWS_SUPPORT
+ if (priv->dr_ctx) {
+ claim_zero(mlx5dr_context_close(priv->dr_ctx));
+ priv->dr_ctx = NULL;
+ }
+#endif
if (priv->rss_conf.rss_key != NULL)
mlx5_free(priv->rss_conf.rss_key);
if (priv->reta_idx != NULL)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index aff1ab74e3..9de5552bc8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9756,13 +9756,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
}
mlx5_free(priv->hw_q);
priv->hw_q = NULL;
- claim_zero(mlx5dr_context_close(priv->dr_ctx));
if (priv->shared_host) {
struct mlx5_priv *host_priv = priv->shared_host->data->dev_private;
__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
priv->shared_host = NULL;
}
- priv->dr_ctx = NULL;
mlx5_free(priv->hw_attr);
priv->hw_attr = NULL;
priv->nb_queue = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.112838763 +0800
+++ 0101-net-mlx5-fix-DR-context-release-ordering.patch 2024-04-13 20:43:05.097753801 +0800
@@ -1 +1 @@
-From d068681b637da6b7857c13711eb1a675b2a341e3 Mon Sep 17 00:00:00 2001
+From 859bafedf3ae9983e143ee35bcbb8046b91430bc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d068681b637da6b7857c13711eb1a675b2a341e3 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 39dc1830d1..8b54843a43 100644
+index df0383078d..95f2ed073c 100644
@@ -26 +28 @@
-@@ -2355,6 +2355,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
+@@ -2268,6 +2268,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
@@ -33,2 +35,2 @@
- if (priv->tlv_options != NULL) {
-@@ -2391,6 +2392,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
+ #endif
+@@ -2299,6 +2300,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
@@ -48 +50 @@
-index 817461017f..c89bd00fb0 100644
+index aff1ab74e3..9de5552bc8 100644
@@ -51 +53 @@
-@@ -10734,13 +10734,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
+@@ -9756,13 +9756,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
* patch 'net/mlx5/hws: fix direct index insert on depend WQE' has been queued to stable release 23.11.1
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Alex Vesker; +Cc: Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3735e8e88c770b49a1588263b2ff5c6bf8b1d9a9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3735e8e88c770b49a1588263b2ff5c6bf8b1d9a9 Mon Sep 17 00:00:00 2001
From: Alex Vesker <valex@nvidia.com>
Date: Wed, 6 Mar 2024 21:21:47 +0100
Subject: [PATCH] net/mlx5/hws: fix direct index insert on depend WQE
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e28392b3adab117a00e780a508049fd3df3d1492 ]
In case a depend WQE was required and direct index was
needed we would not set the direct index on the dep_wqe.
This leads to incorrect insertion at index zero.
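The pattern behind the fix: when a rule write is deferred into a dependency WQE, the insertion index must be captured at enqueue time and carried inside the WQE, so the deferred send replays it instead of defaulting to zero. A simplified, self-contained C sketch (names are illustrative, not the mlx5dr API):

```c
#include <assert.h>
#include <stdint.h>

/* Deferred work entry: carries everything needed to replay the write. */
struct dep_wqe { uint32_t direct_index; };

static uint32_t pick_index(int insert_by_index, uint32_t rule_idx)
{
	return insert_by_index ? rule_idx : 0;
}

/* The fix: capture the index when the entry is queued... */
static void enqueue_dep(struct dep_wqe *w, int insert_by_index,
			uint32_t rule_idx)
{
	w->direct_index = pick_index(insert_by_index, rule_idx);
}

/* ...and use the captured value at send time, never a recompute that
 * lacks the original rule attributes. */
static uint32_t send_dep(const struct dep_wqe *w)
{
	return w->direct_index;
}
```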
Fixes: 38b5bf6452a6 ("net/mlx5/hws: support insert/distribute RTC properties")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_rule.c | 15 ++++++++-------
drivers/net/mlx5/hws/mlx5dr_send.c | 1 +
drivers/net/mlx5/hws/mlx5dr_send.h | 1 +
3 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index e39137a6ee..77245ad97d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -58,14 +58,16 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
struct mlx5dr_rule *rule,
const struct rte_flow_item *items,
struct mlx5dr_match_template *mt,
- void *user_data)
+ struct mlx5dr_rule_attr *attr)
{
struct mlx5dr_matcher *matcher = rule->matcher;
struct mlx5dr_table *tbl = matcher->tbl;
bool skip_rx, skip_tx;
dep_wqe->rule = rule;
- dep_wqe->user_data = user_data;
+ dep_wqe->user_data = attr->user_data;
+ dep_wqe->direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+ attr->rule_idx : 0;
if (!items) { /* rule update */
dep_wqe->rtc_0 = rule->rtc_0;
@@ -292,8 +294,8 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
}
mlx5dr_rule_create_init(rule, &ste_attr, &apply, false);
- mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
- mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+ mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr);
+ mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr);
ste_attr.direct_index = 0;
ste_attr.rtc_0 = match_wqe.rtc_0;
@@ -398,7 +400,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
* dep_wqe buffers (ctrl, data) are also reused for all STE writes.
*/
dep_wqe = mlx5dr_send_add_new_dep_wqe(queue);
- mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data);
+ mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr);
ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
ste_attr.wqe_data = &dep_wqe->wqe_data;
@@ -460,8 +462,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
ste_attr.used_id_rtc_1 = &rule->rtc_1;
ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0;
ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1;
- ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
- attr->rule_idx : 0;
+ ste_attr.direct_index = dep_wqe->direct_index;
} else {
apply.next_direct_idx = --ste_attr.direct_index;
}
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..4c279ba42a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -50,6 +50,7 @@ void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue)
ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1;
ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
ste_attr.wqe_data = &dep_wqe->wqe_data;
+ ste_attr.direct_index = dep_wqe->direct_index;
mlx5dr_send_ste(queue, &ste_attr);
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index c1e8616f7e..c4eaea52ab 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -106,6 +106,7 @@ struct mlx5dr_send_ring_dep_wqe {
uint32_t rtc_1;
uint32_t retry_rtc_0;
uint32_t retry_rtc_1;
+ uint32_t direct_index;
void *user_data;
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.143061923 +0800
+++ 0102-net-mlx5-hws-fix-direct-index-insert-on-depend-WQE.patch 2024-04-13 20:43:05.097753801 +0800
@@ -1 +1 @@
-From e28392b3adab117a00e780a508049fd3df3d1492 Mon Sep 17 00:00:00 2001
+From 3735e8e88c770b49a1588263b2ff5c6bf8b1d9a9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e28392b3adab117a00e780a508049fd3df3d1492 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index aa00c54e53..f14e1e6ecd 100644
+index e39137a6ee..77245ad97d 100644
@@ -44 +46 @@
-@@ -374,8 +376,8 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
+@@ -292,8 +294,8 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
@@ -55 +57 @@
-@@ -482,7 +484,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
+@@ -398,7 +400,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
@@ -64 +66 @@
-@@ -544,8 +546,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
+@@ -460,8 +462,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
@@ -75 +77 @@
-index 64138279a1..f749401c6f 100644
+index 622d574bfa..4c279ba42a 100644
* patch 'net/mlx5: fix template clean up of FDB control flow rule' has been queued to stable release 23.11.1
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Ori Kam, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b1749f6ed202bc6413c51119aac17a32b41c09eb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b1749f6ed202bc6413c51119aac17a32b41c09eb Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wed, 6 Mar 2024 21:21:48 +0100
Subject: [PATCH] net/mlx5: fix template clean up of FDB control flow rule
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ]
This patch refactors the creation and clean up of templates used for
FDB control flow rules, when HWS is enabled.
All pattern and actions templates, and template tables are stored
in a separate structure, `mlx5_flow_hw_ctrl_fdb`. It is allocated
if and only if E-Switch is enabled.
During HWS clean up, all of these templates are explicitly destroyed,
instead of relying on the general template clean up.
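The clean-up routine added by the patch follows a common C idiom: destroy each member only if it was created, in reverse creation order (tables before the templates they reference), then free the container and null the owner's pointer so the function is safe to call twice. An illustrative, self-contained sketch with stand-in types:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for a template table and the pattern template it uses. */
struct ctrl_fdb {
	int *tbl;        /* depends on items_tmpl: destroy first */
	int *items_tmpl;
};

static void ctrl_fdb_cleanup(struct ctrl_fdb **fdb)
{
	if (*fdb == NULL)
		return;               /* idempotent on already-cleaned state */
	if ((*fdb)->tbl) {
		free((*fdb)->tbl);
		(*fdb)->tbl = NULL;
	}
	if ((*fdb)->items_tmpl) {
		free((*fdb)->items_tmpl);
		(*fdb)->items_tmpl = NULL;
	}
	free(*fdb);
	*fdb = NULL;                  /* owner's pointer cleared */
}
```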
Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_flow.h | 19 +++
drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++--------------
3 files changed, 166 insertions(+), 114 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e2c6fe0d00..33401d96e4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1870,11 +1870,7 @@ struct mlx5_priv {
rte_spinlock_t hw_ctrl_lock;
LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
- struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
- struct rte_flow_template_table *hw_esw_sq_miss_tbl;
- struct rte_flow_template_table *hw_esw_zero_tbl;
- struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
- struct rte_flow_template_table *hw_lacp_rx_tbl;
+ struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
struct rte_flow_actions_template *hw_tx_repr_tagging_at;
struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index edc273c518..7a5e334a83 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2446,6 +2446,25 @@ struct mlx5_flow_hw_ctrl_rx {
[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX];
};
+/* Contains all templates required for control flow rules in FDB with HWS. */
+struct mlx5_flow_hw_ctrl_fdb {
+ struct rte_flow_pattern_template *esw_mgr_items_tmpl;
+ struct rte_flow_actions_template *regc_jump_actions_tmpl;
+ struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
+ struct rte_flow_pattern_template *regc_sq_items_tmpl;
+ struct rte_flow_actions_template *port_actions_tmpl;
+ struct rte_flow_template_table *hw_esw_sq_miss_tbl;
+ struct rte_flow_pattern_template *port_items_tmpl;
+ struct rte_flow_actions_template *jump_one_actions_tmpl;
+ struct rte_flow_template_table *hw_esw_zero_tbl;
+ struct rte_flow_pattern_template *tx_meta_items_tmpl;
+ struct rte_flow_actions_template *tx_meta_actions_tmpl;
+ struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+ struct rte_flow_pattern_template *lacp_rx_items_tmpl;
+ struct rte_flow_actions_template *lacp_rx_actions_tmpl;
+ struct rte_flow_template_table *hw_lacp_rx_tbl;
+};
+
#define MLX5_CTRL_PROMISCUOUS (RTE_BIT32(0))
#define MLX5_CTRL_ALL_MULTICAST (RTE_BIT32(1))
#define MLX5_CTRL_BROADCAST (RTE_BIT32(2))
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9de5552bc8..938d9b5824 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8445,6 +8445,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error);
}
+/**
+ * Cleans up all template tables and pattern, and actions templates used for
+ * FDB control flow rules.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ */
+static void
+flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
+
+ if (!priv->hw_ctrl_fdb)
+ return;
+ hw_ctrl_fdb = priv->hw_ctrl_fdb;
+ /* Clean up templates used for LACP default miss table. */
+ if (hw_ctrl_fdb->hw_lacp_rx_tbl)
+ claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL));
+ if (hw_ctrl_fdb->lacp_rx_actions_tmpl)
+ claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl,
+ NULL));
+ if (hw_ctrl_fdb->lacp_rx_items_tmpl)
+ claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+ NULL));
+ /* Clean up templates used for default Tx metadata copy. */
+ if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+ claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL));
+ if (hw_ctrl_fdb->tx_meta_actions_tmpl)
+ claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl,
+ NULL));
+ if (hw_ctrl_fdb->tx_meta_items_tmpl)
+ claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+ NULL));
+ /* Clean up templates used for default FDB jump rule. */
+ if (hw_ctrl_fdb->hw_esw_zero_tbl)
+ claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL));
+ if (hw_ctrl_fdb->jump_one_actions_tmpl)
+ claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl,
+ NULL));
+ if (hw_ctrl_fdb->port_items_tmpl)
+ claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl,
+ NULL));
+ /* Clean up templates used for default SQ miss flow rules - non-root table. */
+ if (hw_ctrl_fdb->hw_esw_sq_miss_tbl)
+ claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL));
+ if (hw_ctrl_fdb->regc_sq_items_tmpl)
+ claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl,
+ NULL));
+ if (hw_ctrl_fdb->port_actions_tmpl)
+ claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl,
+ NULL));
+ /* Clean up templates used for default SQ miss flow rules - root table. */
+ if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl)
+ claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL));
+ if (hw_ctrl_fdb->regc_jump_actions_tmpl)
+ claim_zero(flow_hw_actions_template_destroy(dev,
+ hw_ctrl_fdb->regc_jump_actions_tmpl, NULL));
+ if (hw_ctrl_fdb->esw_mgr_items_tmpl)
+ claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl,
+ NULL));
+ /* Clean up templates structure for FDB control flow rules. */
+ mlx5_free(hw_ctrl_fdb);
+ priv->hw_ctrl_fdb = NULL;
+}
+
/*
* Create a table on the root group to for the LACP traffic redirecting.
*
@@ -8494,110 +8560,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
* @return
* 0 on success, negative values otherwise
*/
-static __rte_unused int
+static int
flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
- struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
- struct rte_flow_pattern_template *port_items_tmpl = NULL;
- struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
- struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL;
- struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
- struct rte_flow_actions_template *port_actions_tmpl = NULL;
- struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
- struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
- struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL;
+ struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
uint32_t xmeta = priv->sh->config.dv_xmeta_en;
uint32_t repr_matching = priv->sh->config.repr_matching;
- int ret;
+ MLX5_ASSERT(priv->hw_ctrl_fdb == NULL);
+ hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY);
+ if (!hw_ctrl_fdb) {
+ DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates",
+ dev->data->port_id);
+ rte_errno = ENOMEM;
+ goto err;
+ }
+ priv->hw_ctrl_fdb = hw_ctrl_fdb;
/* Create templates and table for default SQ miss flow rules - root table. */
- esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
- if (!esw_mgr_items_tmpl) {
+ hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
+ if (!hw_ctrl_fdb->esw_mgr_items_tmpl) {
DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
" template for control flows", dev->data->port_id);
goto err;
}
- regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error);
- if (!regc_jump_actions_tmpl) {
+ hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template
+ (dev, error);
+ if (!hw_ctrl_fdb->regc_jump_actions_tmpl) {
DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
" for control flows", dev->data->port_id);
goto err;
}
- MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
- priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
- (dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error);
- if (!priv->hw_esw_sq_miss_root_tbl) {
+ hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
+ (dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl,
+ error);
+ if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) {
DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
" for control flows", dev->data->port_id);
goto err;
}
/* Create templates and table for default SQ miss flow rules - non-root table. */
- regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
- if (!regc_sq_items_tmpl) {
+ hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
+ if (!hw_ctrl_fdb->regc_sq_items_tmpl) {
DRV_LOG(ERR, "port %u failed to create SQ item template for"
" control flows", dev->data->port_id);
goto err;
}
- port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
- if (!port_actions_tmpl) {
+ hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
+ if (!hw_ctrl_fdb->port_actions_tmpl) {
DRV_LOG(ERR, "port %u failed to create port action template"
" for control flows", dev->data->port_id);
goto err;
}
- MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
- priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
- port_actions_tmpl, error);
- if (!priv->hw_esw_sq_miss_tbl) {
+ hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table
+ (dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl,
+ error);
+ if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
" for control flows", dev->data->port_id);
goto err;
}
/* Create templates and table for default FDB jump flow rules. */
- port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
- if (!port_items_tmpl) {
+ hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
+ if (!hw_ctrl_fdb->port_items_tmpl) {
DRV_LOG(ERR, "port %u failed to create SQ item template for"
" control flows", dev->data->port_id);
goto err;
}
- jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+ hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
(dev, MLX5_HW_LOWEST_USABLE_GROUP, error);
- if (!jump_one_actions_tmpl) {
+ if (!hw_ctrl_fdb->jump_one_actions_tmpl) {
DRV_LOG(ERR, "port %u failed to create jump action template"
" for control flows", dev->data->port_id);
goto err;
}
- MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
- priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
- jump_one_actions_tmpl,
- error);
- if (!priv->hw_esw_zero_tbl) {
+ hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table
+ (dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl,
+ error);
+ if (!hw_ctrl_fdb->hw_esw_zero_tbl) {
DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
" for control flows", dev->data->port_id);
goto err;
}
/* Create templates and table for default Tx metadata copy flow rule. */
if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
- tx_meta_items_tmpl =
+ hw_ctrl_fdb->tx_meta_items_tmpl =
flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error);
- if (!tx_meta_items_tmpl) {
+ if (!hw_ctrl_fdb->tx_meta_items_tmpl) {
DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
" template for control flows", dev->data->port_id);
goto err;
}
- tx_meta_actions_tmpl =
+ hw_ctrl_fdb->tx_meta_actions_tmpl =
flow_hw_create_tx_default_mreg_copy_actions_template(dev, error);
- if (!tx_meta_actions_tmpl) {
+ if (!hw_ctrl_fdb->tx_meta_actions_tmpl) {
DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
" template for control flows", dev->data->port_id);
goto err;
}
- MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
- priv->hw_tx_meta_cpy_tbl =
- flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl,
- tx_meta_actions_tmpl, error);
- if (!priv->hw_tx_meta_cpy_tbl) {
+ hw_ctrl_fdb->hw_tx_meta_cpy_tbl =
+ flow_hw_create_tx_default_mreg_copy_table
+ (dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+ hw_ctrl_fdb->tx_meta_actions_tmpl, error);
+ if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) {
DRV_LOG(ERR, "port %u failed to create table for default"
" Tx metadata copy flow rule", dev->data->port_id);
goto err;
@@ -8605,71 +8670,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
}
/* Create LACP default miss table. */
if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
- lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
- if (!lacp_rx_items_tmpl) {
+ hw_ctrl_fdb->lacp_rx_items_tmpl =
+ flow_hw_create_lacp_rx_pattern_template(dev, error);
+ if (!hw_ctrl_fdb->lacp_rx_items_tmpl) {
DRV_LOG(ERR, "port %u failed to create pattern template"
" for LACP Rx traffic", dev->data->port_id);
goto err;
}
- lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error);
- if (!lacp_rx_actions_tmpl) {
+ hw_ctrl_fdb->lacp_rx_actions_tmpl =
+ flow_hw_create_lacp_rx_actions_template(dev, error);
+ if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) {
DRV_LOG(ERR, "port %u failed to create actions template"
" for LACP Rx traffic", dev->data->port_id);
goto err;
}
- priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl,
- lacp_rx_actions_tmpl, error);
- if (!priv->hw_lacp_rx_tbl) {
+ hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table
+ (dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+ hw_ctrl_fdb->lacp_rx_actions_tmpl, error);
+ if (!hw_ctrl_fdb->hw_lacp_rx_tbl) {
DRV_LOG(ERR, "port %u failed to create template table for"
" for LACP Rx traffic", dev->data->port_id);
goto err;
}
}
return 0;
+
err:
- /* Do not overwrite the rte_errno. */
- ret = -rte_errno;
- if (ret == 0)
- ret = rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to create control tables.");
- if (priv->hw_tx_meta_cpy_tbl) {
- flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL);
- priv->hw_tx_meta_cpy_tbl = NULL;
- }
- if (priv->hw_esw_zero_tbl) {
- flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
- priv->hw_esw_zero_tbl = NULL;
- }
- if (priv->hw_esw_sq_miss_tbl) {
- flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
- priv->hw_esw_sq_miss_tbl = NULL;
- }
- if (priv->hw_esw_sq_miss_root_tbl) {
- flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
- priv->hw_esw_sq_miss_root_tbl = NULL;
- }
- if (lacp_rx_actions_tmpl)
- flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL);
- if (tx_meta_actions_tmpl)
- flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
- if (jump_one_actions_tmpl)
- flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
- if (port_actions_tmpl)
- flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
- if (regc_jump_actions_tmpl)
- flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
- if (lacp_rx_items_tmpl)
- flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL);
- if (tx_meta_items_tmpl)
- flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
- if (port_items_tmpl)
- flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
- if (regc_sq_items_tmpl)
- flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
- if (esw_mgr_items_tmpl)
- flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
- return ret;
+ flow_hw_cleanup_ctrl_fdb_tables(dev);
+ return -EINVAL;
}
static void
@@ -9640,6 +9668,7 @@ err:
}
mlx5_flow_quota_destroy(dev);
flow_hw_destroy_send_to_kernel_action(priv);
+ flow_hw_cleanup_ctrl_fdb_tables(dev);
flow_hw_free_vport_actions(priv);
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
if (priv->hw_drop[i])
@@ -9698,6 +9727,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
return;
flow_hw_rxq_flag_set(dev, false);
flow_hw_flush_all_ctrl_flows(dev);
+ flow_hw_cleanup_ctrl_fdb_tables(dev);
flow_hw_cleanup_tx_repr_tagging(dev);
flow_hw_cleanup_ctrl_rx_tables(dev);
while (!LIST_EMPTY(&priv->flow_hw_grp)) {
@@ -11983,8 +12013,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
proxy_port_id, port_id);
return 0;
}
- if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
- !proxy_priv->hw_esw_sq_miss_tbl) {
+ if (!proxy_priv->hw_ctrl_fdb ||
+ !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+ !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
"default flow tables were not created.",
proxy_port_id, port_id);
@@ -12016,7 +12047,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
actions[2] = (struct rte_flow_action) {
.type = RTE_FLOW_ACTION_TYPE_END,
};
- ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+ ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+ proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl,
items, 0, actions, 0, &flow_info, external);
if (ret) {
DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
@@ -12047,7 +12079,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
.type = RTE_FLOW_ACTION_TYPE_END,
};
flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
- ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+ ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+ proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl,
items, 0, actions, 0, &flow_info, external);
if (ret) {
DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
@@ -12093,8 +12126,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
proxy_priv = proxy_dev->data->dev_private;
if (!proxy_priv->dr_ctx)
return 0;
- if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
- !proxy_priv->hw_esw_sq_miss_tbl)
+ if (!proxy_priv->hw_ctrl_fdb ||
+ !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+ !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
return 0;
cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
while (cf != NULL) {
@@ -12161,7 +12195,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
proxy_port_id, port_id);
return 0;
}
- if (!proxy_priv->hw_esw_zero_tbl) {
+ if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) {
DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
"default flow tables were not created.",
proxy_port_id, port_id);
@@ -12169,7 +12203,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
return -rte_errno;
}
return flow_hw_create_ctrl_flow(dev, proxy_dev,
- proxy_priv->hw_esw_zero_tbl,
+ proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl,
items, 0, actions, 0, &flow_info, false);
}
@@ -12221,10 +12255,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
};
MLX5_ASSERT(priv->master);
- if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+ if (!priv->dr_ctx ||
+ !priv->hw_ctrl_fdb ||
+ !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
return 0;
return flow_hw_create_ctrl_flow(dev, dev,
- priv->hw_tx_meta_cpy_tbl,
+ priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl,
eth_all, 0, copy_reg_action, 0, &flow_info, false);
}
@@ -12316,10 +12352,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
.type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
};
- if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
+ if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl)
return 0;
- return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
- miss_action, 0, &flow_info, false);
+ return flow_hw_create_ctrl_flow(dev, dev,
+ priv->hw_ctrl_fdb->hw_lacp_rx_tbl,
+ eth_lacp, 0, miss_action, 0, &flow_info, false);
}
static uint32_t
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.169094189 +0800
+++ 0103-net-mlx5-fix-template-clean-up-of-FDB-control-flow-r.patch 2024-04-13 20:43:05.097753801 +0800
@@ -1 +1 @@
-From 48db3b61c3b81c6efcd343b7929a000eb998cb0b Mon Sep 17 00:00:00 2001
+From b1749f6ed202bc6413c51119aac17a32b41c09eb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 6ff8f322e0..0091a2459c 100644
+index e2c6fe0d00..33401d96e4 100644
@@ -30 +32 @@
-@@ -1894,11 +1894,7 @@ struct mlx5_priv {
+@@ -1870,11 +1870,7 @@ struct mlx5_priv {
@@ -44 +46 @@
-index ff3830a888..34b5e0f45b 100644
+index edc273c518..7a5e334a83 100644
@@ -47 +49 @@
-@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx {
+@@ -2446,6 +2446,25 @@ struct mlx5_flow_hw_ctrl_rx {
@@ -74 +76 @@
-index a96c829045..feeb071b4b 100644
+index 9de5552bc8..938d9b5824 100644
@@ -77 +79 @@
-@@ -9363,6 +9363,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
+@@ -8445,6 +8445,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
@@ -150 +152 @@
-@@ -9412,110 +9478,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
+@@ -8494,110 +8560,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
@@ -306 +308 @@
-@@ -9523,71 +9588,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
+@@ -8605,71 +8670,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
@@ -391,2 +393,2 @@
-@@ -10619,6 +10647,7 @@ err:
- action_template_drop_release(dev);
+@@ -9640,6 +9668,7 @@ err:
+ }
@@ -399,2 +401,2 @@
-@@ -10681,6 +10710,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
- dev->flow_fp_ops = &rte_flow_fp_default_ops;
+@@ -9698,6 +9727,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
+ return;
@@ -406,2 +408,2 @@
- action_template_drop_release(dev);
-@@ -13259,8 +13289,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+ while (!LIST_EMPTY(&priv->flow_hw_grp)) {
+@@ -11983,8 +12013,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -419 +421 @@
-@@ -13292,7 +13323,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+@@ -12016,7 +12047,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -429 +431 @@
-@@ -13323,7 +13355,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
+@@ -12047,7 +12079,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
@@ -439 +441 @@
-@@ -13369,8 +13402,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
+@@ -12093,8 +12126,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
@@ -451 +453 @@
-@@ -13437,7 +13471,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -12161,7 +12195,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -460 +462 @@
-@@ -13445,7 +13479,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
+@@ -12169,7 +12203,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
@@ -469 +471 @@
-@@ -13497,10 +13531,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
+@@ -12221,10 +12255,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
@@ -484 +486 @@
-@@ -13592,10 +13628,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
+@@ -12316,10 +12352,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
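The patch above consolidates all FDB control-flow templates into a single `mlx5_flow_hw_ctrl_fdb` structure released by one `flow_hw_cleanup_ctrl_fdb_tables()` call, replacing the long per-pointer rollback in the old `err:` label. The shape of that refactor can be sketched in plain C; the struct and helper names below are illustrative stand-ins, not the driver's real API:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a pattern/actions template or table object. */
struct tmpl { int id; };

/* Grouping the related resources in one struct, as the patch does with
 * mlx5_flow_hw_ctrl_fdb, lets a single cleanup routine handle every
 * partially-initialized state. */
struct ctrl_fdb {
	struct tmpl *items_tmpl;
	struct tmpl *actions_tmpl;
	struct tmpl *table;
};

static void
ctrl_fdb_cleanup(struct ctrl_fdb **fdb)
{
	if (*fdb == NULL)
		return; /* idempotent: safe on the error path and on release */
	/* Destroy in reverse creation order: table first, then templates.
	 * free(NULL) is a no-op, so half-built states need no extra checks. */
	free((*fdb)->table);
	free((*fdb)->actions_tmpl);
	free((*fdb)->items_tmpl);
	free(*fdb);
	*fdb = NULL;
}

/* fail_at simulates an allocation failure at a given creation step
 * (negative means no injected failure). */
static struct ctrl_fdb *
ctrl_fdb_create(int fail_at)
{
	struct ctrl_fdb *fdb = calloc(1, sizeof(*fdb));

	if (fdb == NULL)
		return NULL;
	if (fail_at == 0 || !(fdb->items_tmpl = calloc(1, sizeof(struct tmpl))))
		goto err;
	if (fail_at == 1 || !(fdb->actions_tmpl = calloc(1, sizeof(struct tmpl))))
		goto err;
	if (fail_at == 2 || !(fdb->table = calloc(1, sizeof(struct tmpl))))
		goto err;
	return fdb;
err:
	ctrl_fdb_cleanup(&fdb); /* one rollback path for every failure point */
	return NULL;
}
```

This is why the fix can call the same `flow_hw_cleanup_ctrl_fdb_tables()` from the configure error path, from `flow_hw_resource_release()`, and after a failed `flow_hw_create_ctrl_tables()`: the NULL checks inside make the order of partial initialization irrelevant.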
* patch 'net/mlx5: fix flow configure validation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (101 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix template clean up of FDB control flow rule' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: prevent ioctl failure log flooding' " Xueming Li
` (20 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8117b4b2f7fa51f50686aa90939b8d8ac41a4ddc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8117b4b2f7fa51f50686aa90939b8d8ac41a4ddc Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wed, 6 Mar 2024 21:21:50 +0100
Subject: [PATCH] net/mlx5: fix flow configure validation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ff9433b578195be8c6cb44443ad199defdbf3c99 ]
There's an existing limitation in mlx5 PMD, that all configured flow
queues must have the same size. Even though this condition is checked,
some allocations are done before that. This led to a segmentation
fault during rollback on error in the rte_flow_configure() implementation.
This patch fixes that by reorganizing validation, so that configuration
options are validated before any allocations are done and
necessary checks for NULL are added to error rollback.
Bugzilla ID: 1199
Fixes: b401400db24e ("net/mlx5: add port flow configuration")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 58 +++++++++++++++++++++++----------
1 file changed, 41 insertions(+), 17 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 938d9b5824..a54075ed7e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9291,6 +9291,38 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
return true;
}
+static int
+flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
+ uint16_t nb_queue,
+ const struct rte_flow_queue_attr *queue_attr[],
+ struct rte_flow_error *error)
+{
+ uint32_t size;
+ unsigned int i;
+
+ if (port_attr == NULL)
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Port attributes must be non-NULL");
+
+ if (nb_queue == 0)
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "At least one flow queue is required");
+
+ if (queue_attr == NULL)
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Queue attributes must be non-NULL");
+
+ size = queue_attr[0]->size;
+ for (i = 1; i < nb_queue; ++i) {
+ if (queue_attr[i]->size != size)
+ return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "All flow queues must have the same size");
+ }
+
+ return 0;
+}
+
/**
* Configure port HWS resources.
*
@@ -9342,10 +9374,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
int ret = 0;
uint32_t action_flags;
- if (!port_attr || !nb_queue || !queue_attr) {
- rte_errno = EINVAL;
- goto err;
- }
+ if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
+ return -rte_errno;
/*
* Calling rte_flow_configure() again is allowed if and only if
* provided configuration matches the initially provided one.
@@ -9392,14 +9422,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
/* Allocate the queue job descriptor LIFO. */
mem_size = sizeof(priv->hw_q[0]) * nb_q_updated;
for (i = 0; i < nb_q_updated; i++) {
- /*
- * Check if the queues' size are all the same as the
- * limitation from HWS layer.
- */
- if (_queue_attr[i]->size != _queue_attr[0]->size) {
- rte_errno = EINVAL;
- goto err;
- }
mem_size += (sizeof(struct mlx5_hw_q_job *) +
sizeof(struct mlx5_hw_q_job) +
sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
@@ -9681,12 +9703,14 @@ err:
flow_hw_destroy_vlan(dev);
if (dr_ctx)
claim_zero(mlx5dr_context_close(dr_ctx));
- for (i = 0; i < nb_q_updated; i++) {
- rte_ring_free(priv->hw_q[i].indir_iq);
- rte_ring_free(priv->hw_q[i].indir_cq);
+ if (priv->hw_q) {
+ for (i = 0; i < nb_q_updated; i++) {
+ rte_ring_free(priv->hw_q[i].indir_iq);
+ rte_ring_free(priv->hw_q[i].indir_cq);
+ }
+ mlx5_free(priv->hw_q);
+ priv->hw_q = NULL;
}
- mlx5_free(priv->hw_q);
- priv->hw_q = NULL;
if (priv->acts_ipool) {
mlx5_ipool_destroy(priv->acts_ipool);
priv->acts_ipool = NULL;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.204017644 +0800
+++ 0104-net-mlx5-fix-flow-configure-validation.patch 2024-04-13 20:43:05.107753788 +0800
@@ -1 +1 @@
-From ff9433b578195be8c6cb44443ad199defdbf3c99 Mon Sep 17 00:00:00 2001
+From 8117b4b2f7fa51f50686aa90939b8d8ac41a4ddc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ff9433b578195be8c6cb44443ad199defdbf3c99 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -22,2 +24,2 @@
- drivers/net/mlx5/mlx5_flow_hw.c | 62 +++++++++++++++++++++++----------
- 1 file changed, 43 insertions(+), 19 deletions(-)
+ drivers/net/mlx5/mlx5_flow_hw.c | 58 +++++++++++++++++++++++----------
+ 1 file changed, 41 insertions(+), 17 deletions(-)
@@ -26 +28 @@
-index d88959e36d..35f1ed7a03 100644
+index 938d9b5824..a54075ed7e 100644
@@ -29,2 +31,2 @@
-@@ -10289,6 +10289,38 @@ mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char
- RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+@@ -9291,6 +9291,38 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
+ return true;
@@ -68 +70 @@
-@@ -10340,10 +10372,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
+@@ -9342,10 +9374,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
@@ -81 +83 @@
-@@ -10390,14 +10420,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
+@@ -9392,14 +9422,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
@@ -94,6 +96,6 @@
- sizeof(struct mlx5_hw_q_job)) * _queue_attr[i]->size;
- }
-@@ -10679,14 +10701,16 @@ err:
- __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
- priv->shared_host = NULL;
- }
+ sizeof(struct mlx5_hw_q_job) +
+ sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+@@ -9681,12 +9703,14 @@ err:
+ flow_hw_destroy_vlan(dev);
+ if (dr_ctx)
+ claim_zero(mlx5dr_context_close(dr_ctx));
@@ -103,2 +104,0 @@
-- rte_ring_free(priv->hw_q[i].flow_transfer_pending);
-- rte_ring_free(priv->hw_q[i].flow_transfer_completed);
@@ -109,2 +108,0 @@
-+ rte_ring_free(priv->hw_q[i].flow_transfer_pending);
-+ rte_ring_free(priv->hw_q[i].flow_transfer_completed);
^ permalink raw reply [flat|nested] 263+ messages in thread
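The fix above moves all attribute checks ahead of any allocation and NULL-guards the rollback. That validate-before-allocate pattern can be illustrated with a minimal, hypothetical sketch (the struct and function names are stand-ins for `rte_flow_queue_attr` and `flow_hw_configure()`, not the driver's real API):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified queue attribute, standing in for rte_flow_queue_attr. */
struct queue_attr { uint32_t size; };

/* Validate everything up front, before any allocation, mirroring the
 * flow_hw_validate_attributes() helper added by the patch. */
static int
validate_attrs(const struct queue_attr *attrs[], uint16_t nb_queue)
{
	uint16_t i;

	if (nb_queue == 0 || attrs == NULL)
		return -EINVAL;
	for (i = 1; i < nb_queue; i++)
		if (attrs[i]->size != attrs[0]->size)
			return -EINVAL; /* all queues must share one size */
	return 0;
}

static void **
configure(const struct queue_attr *attrs[], uint16_t nb_queue)
{
	void **q;
	uint16_t i;

	/* Nothing is allocated before validation succeeds, so an invalid
	 * configuration can never leave half-built state behind. */
	if (validate_attrs(attrs, nb_queue) != 0)
		return NULL;
	q = calloc(nb_queue, sizeof(*q));
	if (q == NULL)
		return NULL;
	for (i = 0; i < nb_queue; i++) {
		q[i] = malloc(attrs[i]->size);
		if (q[i] == NULL)
			goto err;
	}
	return q;
err:
	/* The rollback tolerates NULL entries, as the fixed err: label
	 * does for priv->hw_q; free(NULL) is a no-op. */
	for (i = 0; i < nb_queue; i++)
		free(q[i]);
	free(q);
	return NULL;
}
```

The crash fixed by the patch came from the opposite ordering: resources were allocated first, the size check failed later, and the error path then touched pointers that were never set.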
* patch 'net/mlx5: prevent ioctl failure log flooding' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (102 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix flow configure validation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix age position in hairpin split' " Xueming Li
` (19 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Eli Britstein; +Cc: Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fc12ccc0475d754caab8eb9d06c8e88d68e98d6d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fc12ccc0475d754caab8eb9d06c8e88d68e98d6d Mon Sep 17 00:00:00 2001
From: Eli Britstein <elibr@nvidia.com>
Date: Thu, 7 Mar 2024 08:13:45 +0200
Subject: [PATCH] net/mlx5: prevent ioctl failure log flooding
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 84ba1440c5dff8d716c1a2643aa3eb5e806619ff ]
The following log is printed in WARNING severity:
mlx5_net: port 1 ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:
Operation not supported
Reduce the severity to DEBUG to prevent this log from flooding
when hundreds of probed ports do not support this flow control
query.
Fixes: 1256805dd54d ("net/mlx5: move Linux-specific functions")
Signed-off-by: Eli Britstein <elibr@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_ethdev_os.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 0ee8c58ba7..4f3e790c0b 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -671,7 +671,7 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
ifr.ifr_data = (void *)ðpause;
ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
if (ret) {
- DRV_LOG(WARNING,
+ DRV_LOG(DEBUG,
"port %u ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:"
" %s",
dev->data->port_id, strerror(rte_errno));
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.241112995 +0800
+++ 0105-net-mlx5-prevent-ioctl-failure-log-flooding.patch 2024-04-13 20:43:05.107753788 +0800
@@ -1 +1 @@
-From 84ba1440c5dff8d716c1a2643aa3eb5e806619ff Mon Sep 17 00:00:00 2001
+From fc12ccc0475d754caab8eb9d06c8e88d68e98d6d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 84ba1440c5dff8d716c1a2643aa3eb5e806619ff ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index e1bc3f7c2e..1f511d6e00 100644
+index 0ee8c58ba7..4f3e790c0b 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix age position in hairpin split' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (103 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: prevent ioctl failure log flooding' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix drop action release timing' " Xueming Li
` (18 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Bing Zhao; +Cc: Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9aba4dee4d43bfc96d4a15be0b77c9fb9301d513
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9aba4dee4d43bfc96d4a15be0b77c9fb9301d513 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Thu, 7 Mar 2024 10:09:24 +0200
Subject: [PATCH] net/mlx5: fix age position in hairpin split
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4c89815eab7471b98388dc958b95777d341f05fc ]
When splitting a hairpin rule implicitly, the count action will
be on either the Tx or the Rx subflow, based on the encapsulation check.
Once there is a flow rule with both count and age actions, one counter
will be reused. If there is only an age action and the ASO flow hit is
supported, the flow hit will be chosen instead of a counter.
In the previous flow splitting, the age would always be in the Rx
part, while the count would be on the Tx part when there is an encap.
Before this commit, 2 issues can be observed with a hairpin split:
1. On the root table, one counter was used on both Rx and Tx parts
for age and count actions. Then one ingress packet will be
counted twice.
2. On the non-root table, an extra ASO flow hit was used on the Rx
part. This would cause some overhead.
The age and count actions should be in the same subflow instead of split across the two.
Fixes: daed4b6e3db2 ("net/mlx5: use aging by counter when counter exists")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 3 +--
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ee210549e7..ccfd189c1f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5492,6 +5492,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
}
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
+ case RTE_FLOW_ACTION_TYPE_AGE:
if (encap) {
rte_memcpy(actions_tx, actions,
sizeof(struct rte_flow_action));
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e36443436e..7688d97813 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -19094,8 +19094,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev,
LIST_FOREACH(act, &age_info->aged_aso, next) {
nb_flows++;
if (nb_contexts) {
- context[nb_flows - 1] =
- act->age_params.context;
+ context[nb_flows - 1] = act->age_params.context;
if (!(--nb_contexts))
break;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.265894063 +0800
+++ 0106-net-mlx5-fix-age-position-in-hairpin-split.patch 2024-04-13 20:43:05.117753775 +0800
@@ -1 +1 @@
-From 4c89815eab7471b98388dc958b95777d341f05fc Mon Sep 17 00:00:00 2001
+From 9aba4dee4d43bfc96d4a15be0b77c9fb9301d513 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4c89815eab7471b98388dc958b95777d341f05fc ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -36 +38 @@
-index 6484874c35..f31fdfbf3d 100644
+index ee210549e7..ccfd189c1f 100644
@@ -39 +41 @@
-@@ -5399,6 +5399,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
+@@ -5492,6 +5492,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
@@ -48 +50 @@
-index 80239bebee..4badde1a9a 100644
+index e36443436e..7688d97813 100644
@@ -51 +53 @@
-@@ -19361,8 +19361,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev,
+@@ -19094,8 +19094,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/mlx5: fix drop action release timing' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (104 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix age position in hairpin split' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix warning about copy length' " Xueming Li
` (17 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Bing Zhao; +Cc: Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2864fd3102de0b12ee38816c850b320482ecfb92
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2864fd3102de0b12ee38816c850b320482ecfb92 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Fri, 8 Mar 2024 05:22:37 +0200
Subject: [PATCH] net/mlx5: fix drop action release timing
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 22a3761b782b7c46ca428209b15b4f7382a40a62 ]
When creating the drop action Devx object, the global counter set
is also used, as it is in regular or hairpin queue creation.
The drop action should be destroyed before the global counter set is
released; otherwise, the counter set object is still referenced and
cannot be released successfully. This would cause the counter set
resources to be exhausted after starting and stopping the ports
repeatedly.
Fixes: 65b3cd0dc39b ("net/mlx5: create global drop action")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 95f2ed073c..417e88e848 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2290,12 +2290,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
priv->txqs = NULL;
}
mlx5_proc_priv_uninit(dev);
+ if (priv->drop_queue.hrxq)
+ mlx5_drop_action_destroy(dev);
if (priv->q_counters) {
mlx5_devx_cmd_destroy(priv->q_counters);
priv->q_counters = NULL;
}
- if (priv->drop_queue.hrxq)
- mlx5_drop_action_destroy(dev);
if (priv->mreg_cp_tbl)
mlx5_hlist_destroy(priv->mreg_cp_tbl);
mlx5_mprq_free_mp(dev);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.302722515 +0800
+++ 0107-net-mlx5-fix-drop-action-release-timing.patch 2024-04-13 20:43:05.117753775 +0800
@@ -1 +1 @@
-From 22a3761b782b7c46ca428209b15b4f7382a40a62 Mon Sep 17 00:00:00 2001
+From 2864fd3102de0b12ee38816c850b320482ecfb92 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 22a3761b782b7c46ca428209b15b4f7382a40a62 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 8b54843a43..d1a63822a5 100644
+index 95f2ed073c..417e88e848 100644
@@ -28 +30 @@
-@@ -2382,12 +2382,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
+@@ -2290,12 +2290,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 263+ messages in thread
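The ordering constraint in the patch above is a general resource-teardown
rule: an object holding a reference must be destroyed before the object it
references is released. A minimal sketch in plain C (hypothetical
`counter`/`drop_action` types, not the mlx5 API):

```c
#include <stddef.h>

/* Hypothetical counter object with a reference count. */
struct counter {
	int refcnt;
};

/* A "drop action" that pins the counter while it exists. */
struct drop_action {
	struct counter *cnt;
};

/* Destroying the drop action drops its reference on the counter. */
static void
drop_action_destroy(struct drop_action *da)
{
	da->cnt->refcnt--;
	da->cnt = NULL;
}

/* Releasing the counter succeeds only once nothing references it. */
static int
counter_release(struct counter *c)
{
	return (c->refcnt != 0) ? -1 : 0;
}
```

Releasing in the wrong order makes `counter_release()` fail with -1 while
the drop action still holds its reference, mirroring the counter-set
exhaustion seen when ports are started and stopped repeatedly.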
* patch 'net/mlx5: fix warning about copy length' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (105 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix drop action release timing' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/bnxt: fix number of Tx queues being created' " Xueming Li
` (16 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Morten Brørup; +Cc: Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5a8ca987e9aff8b1ca4c3d3fc477a56c33ce4bbd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5a8ca987e9aff8b1ca4c3d3fc477a56c33ce4bbd Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>
Date: Mon, 16 Jan 2023 14:07:23 +0100
Subject: [PATCH] net/mlx5: fix warning about copy length
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c19580fb8e9ad6f153d46f731ec7cd2050b3021b ]
Use RTE_PTR_ADD where copying to the offset of a field in a structure
holding multiple fields, to avoid compiler warnings with decorated
rte_memcpy.
Fixes: 16a7dbc4f690 ("net/mlx5: make flow modify action list thread safe")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7688d97813..b3ccc2063c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5965,7 +5965,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx)
"cannot allocate resource memory");
return NULL;
}
- rte_memcpy(&entry->ft_type,
+ rte_memcpy(RTE_PTR_ADD(entry, offsetof(typeof(*entry), ft_type)),
RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)),
key_len + data_len);
if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.328523681 +0800
+++ 0108-net-mlx5-fix-warning-about-copy-length.patch 2024-04-13 20:43:05.127753762 +0800
@@ -1 +1 @@
-From c19580fb8e9ad6f153d46f731ec7cd2050b3021b Mon Sep 17 00:00:00 2001
+From 5a8ca987e9aff8b1ca4c3d3fc477a56c33ce4bbd Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c19580fb8e9ad6f153d46f731ec7cd2050b3021b ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 4badde1a9a..d434c678c8 100644
+index 7688d97813..b3ccc2063c 100644
@@ -26 +28 @@
-@@ -6205,7 +6205,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx)
+@@ -5965,7 +5965,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx)
^ permalink raw reply [flat|nested] 263+ messages in thread
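The pattern in the fix above can be shown without DPDK. Below is a hedged
sketch using a local `PTR_ADD` macro in place of `RTE_PTR_ADD` and a
made-up `struct resource` layout; computing a byte offset on both the
source and the destination keeps a decorated `memcpy()` from warning that
the copy is larger than the single named field:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Local stand-in for RTE_PTR_ADD: byte-wise pointer arithmetic. */
#define PTR_ADD(ptr, x) ((void *)((uintptr_t)(ptr) + (x)))

/* Made-up layout: everything from ft_type onward is one copied region. */
struct resource {
	void *priv;           /* not part of the copied region */
	uint8_t ft_type;      /* first field of the copied region */
	uint32_t actions_num;
	uint64_t flags;
};

/* Copy the tail of 'ref' (ft_type..end of struct) into 'entry'.  Using
 * byte offsets on both sides avoids the warning that the destination
 * "field" is smaller than the copy length. */
static void
resource_copy_tail(struct resource *entry, const struct resource *ref)
{
	size_t off = offsetof(struct resource, ft_type);

	memcpy(PTR_ADD(entry, off), PTR_ADD(ref, off), sizeof(*ref) - off);
}
```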
* patch 'net/bnxt: fix number of Tx queues being created' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (106 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix warning about copy length' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'examples/ipsec-secgw: fix Rx queue ID in Rx callback' " Xueming Li
` (15 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Kishore Padmanabha; +Cc: Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0a44e64c41c87bfd8c4e01dd37b0ec43c3d5f509
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0a44e64c41c87bfd8c4e01dd37b0ec43c3d5f509 Mon Sep 17 00:00:00 2001
From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Date: Mon, 13 Nov 2023 11:08:52 -0500
Subject: [PATCH] net/bnxt: fix number of Tx queues being created
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 05b67582cc93128bbf2eb26726d781b8c5c561b3 ]
The number of Tx queues for the representor port was limited by the
number of Rx rings instead of the number of Tx rings.
Fixes: 322bd6e70272 ("net/bnxt: add port representor infrastructure")
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_reps.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index a7b75b543e..6d6b8252e2 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -739,10 +739,10 @@ int bnxt_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
struct bnxt_tx_queue *parent_txq, *txq;
struct bnxt_vf_rep_tx_queue *vfr_txq;
- if (queue_idx >= rep_bp->rx_nr_rings) {
+ if (queue_idx >= rep_bp->tx_nr_rings) {
PMD_DRV_LOG(ERR,
"Cannot create Tx rings %d. %d rings available\n",
- queue_idx, rep_bp->rx_nr_rings);
+ queue_idx, rep_bp->tx_nr_rings);
return -EINVAL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.361514438 +0800
+++ 0109-net-bnxt-fix-number-of-Tx-queues-being-created.patch 2024-04-13 20:43:05.127753762 +0800
@@ -1 +1 @@
-From 05b67582cc93128bbf2eb26726d781b8c5c561b3 Mon Sep 17 00:00:00 2001
+From 0a44e64c41c87bfd8c4e01dd37b0ec43c3d5f509 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 05b67582cc93128bbf2eb26726d781b8c5c561b3 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index edcc27f556..79b3583636 100644
+index a7b75b543e..6d6b8252e2 100644
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'examples/ipsec-secgw: fix Rx queue ID in Rx callback' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (107 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/bnxt: fix number of Tx queues being created' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'doc: fix default IP fragments maximum in programmer guide' " Xueming Li
` (14 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shihong Wang; +Cc: Chaoyong He, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f7909e3c75d813035a9f8238df9be11d96483667
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f7909e3c75d813035a9f8238df9be11d96483667 Mon Sep 17 00:00:00 2001
From: Shihong Wang <shihong.wang@corigine.com>
Date: Mon, 11 Mar 2024 10:32:47 +0800
Subject: [PATCH] examples/ipsec-secgw: fix Rx queue ID in Rx callback
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 ]
The Rx queue ID on the core and the Rx queue ID on the port are not
necessarily equal: for example, core0 may poll two Rx queues, queue0
and queue1, where queue0 is rx_queueid0 on port0 and queue1 is
rx_queueid0 on port1.
The 'rte_eth_add_rx_callback()' function registers the callback on a
per-port basis, so it must be passed the Rx queue ID on the port.
Fixes: d04bb1c52647 ("examples/ipsec-secgw: use HW parsed packet type in poll mode")
Signed-off-by: Shihong Wang <shihong.wang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
examples/ipsec-secgw/ipsec-secgw.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a61bea400a..45a303850d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2093,10 +2093,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads,
/* Register Rx callback if ptypes are not supported */
if (!ptype_supported &&
- !rte_eth_add_rx_callback(portid, queue,
+ !rte_eth_add_rx_callback(portid, rx_queueid,
parse_ptype_cb, NULL)) {
printf("Failed to add rx callback: port=%d, "
- "queue=%d\n", portid, queue);
+ "rx_queueid=%d\n", portid, rx_queueid);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.386382405 +0800
+++ 0110-examples-ipsec-secgw-fix-Rx-queue-ID-in-Rx-callback.patch 2024-04-13 20:43:05.127753762 +0800
@@ -1 +1 @@
-From 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 Mon Sep 17 00:00:00 2001
+From f7909e3c75d813035a9f8238df9be11d96483667 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
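The core-local versus port-local queue numbering in the patch above can be
illustrated standalone. The sketch below assumes a hypothetical
`lcore_rx_queue` table like the one ipsec-secgw keeps per core; the array
index is the core-local queue id, while `rx_queueid` is what per-port APIs
such as `rte_eth_add_rx_callback()` expect:

```c
#include <stdint.h>

/* One (port, queue) pair polled by a core.  The entry's index in the
 * core's table is the core-local queue id; rx_queueid is the id the
 * port itself knows, which is what per-port APIs expect. */
struct lcore_rx_queue {
	uint16_t port_id;
	uint16_t rx_queueid;
};

/* Resolve a core-local queue index to the port-local Rx queue id that
 * must be handed to per-port calls.  Passing the core-local index
 * instead would target the wrong (or a nonexistent) port queue. */
static uint16_t
port_rx_queueid(const struct lcore_rx_queue *tbl, uint16_t core_q)
{
	return tbl[core_q].rx_queueid;
}
```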
* patch 'doc: fix default IP fragments maximum in programmer guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (108 preceding siblings ...)
2024-04-13 12:49 ` patch 'examples/ipsec-secgw: fix Rx queue ID in Rx callback' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/nfp: fix uninitialized variable' " Xueming Li
` (13 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Simei Su; +Cc: Tyler Retzlaff, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a5ac9baa7a5d6aae1d79193ce25cd0725173e5b8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a5ac9baa7a5d6aae1d79193ce25cd0725173e5b8 Mon Sep 17 00:00:00 2001
From: Simei Su <simei.su@intel.com>
Date: Fri, 5 Jan 2024 10:44:17 +0800
Subject: [PATCH] doc: fix default IP fragments maximum in programmer guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0219d467bcb1d19b386b5bae8eecd3514ba13fdb ]
Update documentation value to match default value in code base.
Fixes: f8e0f8ce9030 ("ip_frag: increase default maximum of fragments")
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
doc/guides/prog_guide/ip_fragment_reassembly_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
index 314d4adbb8..b14289eb73 100644
--- a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
+++ b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
@@ -43,7 +43,7 @@ Note that all update/lookup operations on Fragment Table are not thread safe.
So if different execution contexts (threads/processes) will access the same table simultaneously,
then some external syncing mechanism have to be provided.
-Each table entry can hold information about packets consisting of up to RTE_LIBRTE_IP_FRAG_MAX (by default: 4) fragments.
+Each table entry can hold information about packets consisting of up to RTE_LIBRTE_IP_FRAG_MAX (by default: 8) fragments.
Code example, that demonstrates creation of a new Fragment table:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.416490366 +0800
+++ 0111-doc-fix-default-IP-fragments-maximum-in-programmer-g.patch 2024-04-13 20:43:05.127753762 +0800
@@ -1 +1 @@
-From 0219d467bcb1d19b386b5bae8eecd3514ba13fdb Mon Sep 17 00:00:00 2001
+From a5ac9baa7a5d6aae1d79193ce25cd0725173e5b8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0219d467bcb1d19b386b5bae8eecd3514ba13fdb ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/nfp: fix uninitialized variable' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (109 preceding siblings ...)
2024-04-13 12:49 ` patch 'doc: fix default IP fragments maximum in programmer guide' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'app/testpmd: fix auto-completion for indirect action list' " Xueming Li
` (12 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Chaoyong He; +Cc: Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4aa1a6420482dccab127369b01d51d1825fdd71d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4aa1a6420482dccab127369b01d51d1825fdd71d Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Tue, 19 Mar 2024 16:55:07 +0800
Subject: [PATCH] net/nfp: fix uninitialized variable
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 77cb47144932a52c34f1825d90a8804c2538817b ]
CI found that, in the logic of 'nfp_aesgcm_iv_update()', the variable
'cfg_iv' may be used uninitialized in some cases.
Coverity issue: 415808
Fixes: 7e13f2dc603e ("net/nfp: fix IPsec data endianness")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/nfp_ipsec.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c
index aebdbb2f48..b10cda570b 100644
--- a/drivers/net/nfp/nfp_ipsec.c
+++ b/drivers/net/nfp/nfp_ipsec.c
@@ -523,7 +523,7 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
char *iv_b;
char *iv_str;
const rte_be32_t *iv_value;
- uint8_t cfg_iv[NFP_ESP_IV_LENGTH];
+ uint8_t cfg_iv[NFP_ESP_IV_LENGTH] = {};
iv_str = strdup(iv_string);
if (iv_str == NULL) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.445391928 +0800
+++ 0112-net-nfp-fix-uninitialized-variable.patch 2024-04-13 20:43:05.137753749 +0800
@@ -1 +1 @@
-From 77cb47144932a52c34f1825d90a8804c2538817b Mon Sep 17 00:00:00 2001
+From 4aa1a6420482dccab127369b01d51d1825fdd71d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 77cb47144932a52c34f1825d90a8804c2538817b ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 205d1d594c..647bc2bb6d 100644
+index aebdbb2f48..b10cda570b 100644
@@ -24 +26 @@
-@@ -526,7 +526,7 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
+@@ -523,7 +523,7 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
^ permalink raw reply [flat|nested] 263+ messages in thread
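The fix above is the common defensive pattern of zero-initializing a stack
buffer that may be only partially written. A standalone sketch (hypothetical
`parse_iv()` helper and `IV_LEN` value, not the nfp driver code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define IV_LEN 16   /* stand-in for NFP_ESP_IV_LENGTH; value assumed */

/* Copies up to IV_LEN bytes of IV material into 'out'.  When the
 * source is shorter than IV_LEN, bytes n..IV_LEN-1 are never written,
 * so the caller must zero-initialize 'out' or later reads of the tail
 * see indeterminate stack bytes. */
static int
parse_iv(const uint8_t *src, size_t n, uint8_t *out)
{
	if (n > IV_LEN)
		return -1;
	memcpy(out, src, n);
	return 0;
}
```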
* patch 'app/testpmd: fix auto-completion for indirect action list' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (110 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/nfp: fix uninitialized variable' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/ena: fix mbuf double free in fast free mode' " Xueming Li
` (11 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shani Peretz; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f7d1b5cff39f423c2afcb4f8a12c9e2abb63bd50
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f7d1b5cff39f423c2afcb4f8a12c9e2abb63bd50 Mon Sep 17 00:00:00 2001
From: Shani Peretz <shperetz@nvidia.com>
Date: Mon, 18 Mar 2024 11:21:09 +0200
Subject: [PATCH] app/testpmd: fix auto-completion for indirect action list
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit aaa5b54d3890cf32fa1eabbcb3e306ebdc4f938c ]
In the process of auto-completion of a command in testpmd,
the parser splits the command into tokens, where each token
represents an argument and defines a parsing function.
The parsing function of the indirect_list action argument was returning
before having the opportunity to handle the argument.
The fix ensures that the function appropriately handles
the argument before finishing.
Fixes: 72a3dec7126f ("ethdev: add indirect flow list action")
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
app/test-pmd/cmdline_flow.c | 46 ++++++++++++++++++++-----------------
2 files changed, 26 insertions(+), 21 deletions(-)
diff --git a/.mailmap b/.mailmap
index 3b32923fef..57e72894c0 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1281,6 +1281,7 @@ Shahed Shaikh <shshaikh@marvell.com> <shahed.shaikh@cavium.com>
Shai Brandes <shaibran@amazon.com>
Shailendra Bhatnagar <shailendra.bhatnagar@intel.com>
Shally Verma <shallyv@marvell.com> <shally.verma@caviumnetworks.com>
+Shani Peretz <shperetz@nvidia.com>
Shannon Nelson <snelson@pensando.io>
Shannon Zhao <zhaoshenglong@huawei.com>
Shaopeng He <shaopeng.he@intel.com>
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index b19b3205f0..681924b379 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -7395,11 +7395,13 @@ static const struct token token_list[] = {
.type = "UNSIGNED",
.help = "unsigned integer value",
.call = parse_indlst_id2ptr,
+ .comp = comp_none,
},
[INDIRECT_LIST_ACTION_ID2PTR_CONF] = {
.type = "UNSIGNED",
.help = "unsigned integer value",
.call = parse_indlst_id2ptr,
+ .comp = comp_none,
},
[ACTION_SHARED_INDIRECT] = {
.name = "shared_indirect",
@@ -11334,34 +11336,36 @@ parse_indlst_id2ptr(struct context *ctx, const struct token *token,
uint32_t id;
int ret;
- if (!action)
- return -1;
ctx->objdata = 0;
ctx->object = &id;
ctx->objmask = NULL;
ret = parse_int(ctx, token, str, len, ctx->object, sizeof(id));
+ ctx->object = action;
if (ret != (int)len)
return ret;
- ctx->object = action;
- action_conf = (void *)(uintptr_t)action->conf;
- action_conf->conf = NULL;
- switch (ctx->curr) {
- case INDIRECT_LIST_ACTION_ID2PTR_HANDLE:
- action_conf->handle = (typeof(action_conf->handle))
- port_action_handle_get_by_id(ctx->port, id);
- if (!action_conf->handle) {
- printf("no indirect list handle for id %u\n", id);
- return -1;
+
+ /* set handle and conf */
+ if (action) {
+ action_conf = (void *)(uintptr_t)action->conf;
+ action_conf->conf = NULL;
+ switch (ctx->curr) {
+ case INDIRECT_LIST_ACTION_ID2PTR_HANDLE:
+ action_conf->handle = (typeof(action_conf->handle))
+ port_action_handle_get_by_id(ctx->port, id);
+ if (!action_conf->handle) {
+ printf("no indirect list handle for id %u\n", id);
+ return -1;
+ }
+ break;
+ case INDIRECT_LIST_ACTION_ID2PTR_CONF:
+ indlst_conf = indirect_action_list_conf_get(id);
+ if (!indlst_conf)
+ return -1;
+ action_conf->conf = (const void **)indlst_conf->conf;
+ break;
+ default:
+ break;
}
- break;
- case INDIRECT_LIST_ACTION_ID2PTR_CONF:
- indlst_conf = indirect_action_list_conf_get(id);
- if (!indlst_conf)
- return -1;
- action_conf->conf = (const void **)indlst_conf->conf;
- break;
- default:
- break;
}
return ret;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.477437587 +0800
+++ 0113-app-testpmd-fix-auto-completion-for-indirect-action-.patch 2024-04-13 20:43:05.137753749 +0800
@@ -1 +1 @@
-From aaa5b54d3890cf32fa1eabbcb3e306ebdc4f938c Mon Sep 17 00:00:00 2001
+From f7d1b5cff39f423c2afcb4f8a12c9e2abb63bd50 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit aaa5b54d3890cf32fa1eabbcb3e306ebdc4f938c ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index e189477a2f..a0b19832b5 100644
+index 3b32923fef..57e72894c0 100644
@@ -29 +31 @@
-@@ -1293,6 +1293,7 @@ Shahed Shaikh <shshaikh@marvell.com> <shahed.shaikh@cavium.com>
+@@ -1281,6 +1281,7 @@ Shahed Shaikh <shshaikh@marvell.com> <shahed.shaikh@cavium.com>
@@ -38 +40 @@
-index fd6c51f72d..60ee9337cf 100644
+index b19b3205f0..681924b379 100644
@@ -41 +43 @@
-@@ -7839,11 +7839,13 @@ static const struct token token_list[] = {
+@@ -7395,11 +7395,13 @@ static const struct token token_list[] = {
@@ -55 +57 @@
-@@ -11912,34 +11914,36 @@ parse_indlst_id2ptr(struct context *ctx, const struct token *token,
+@@ -11334,34 +11336,36 @@ parse_indlst_id2ptr(struct context *ctx, const struct token *token,
^ permalink raw reply [flat|nested] 263+ messages in thread
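The underlying pattern in the patch above — a token parser that must still
consume and validate its input when called with a NULL result object during
completion — can be sketched standalone (hypothetical `parse_id()` helper,
not the testpmd API):

```c
#include <stdlib.h>

struct action { long id; };

/* Returns the number of characters consumed, or -1 on a bad token.
 * 'act' may be NULL during auto-completion: the token must still be
 * parsed so completion can proceed; only the store step is skipped. */
static int
parse_id(const char *str, struct action *act)
{
	char *end;
	long id = strtol(str, &end, 0);

	if (end == str || *end != '\0')
		return -1;                /* not a number: reject the token */
	if (act != NULL)
		act->id = id;             /* normal pass: record the value */
	return (int)(end - str);
}
```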
* patch 'net/ena: fix mbuf double free in fast free mode' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (111 preceding siblings ...)
2024-04-13 12:49 ` patch 'app/testpmd: fix auto-completion for indirect action list' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/vmxnet3: ignore Rx queue interrupt setup on FreeBSD' " Xueming Li
` (10 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Shai Brandes; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d2d309e5cf0ea384c13c6357a4a0f2b08816214e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d2d309e5cf0ea384c13c6357a4a0f2b08816214e Mon Sep 17 00:00:00 2001
From: Shai Brandes <shaibran@amazon.com>
Date: Wed, 20 Mar 2024 16:52:32 +0200
Subject: [PATCH] net/ena: fix mbuf double free in fast free mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 237407f0c3dff2d0a6bbeb5f35c8b32fe5c8afed ]
Fixed a double free of mbufs exposed in mbuf fast free mode
when handling multi-mbuf packets.
The faulty patch mishandled the freeing of non-head mbufs: it
iterated over linked mbufs and collected them into an array,
which was then passed to rte_pktmbuf_free_bulk.
However, rte_pktmbuf_free_bulk already performs an internal
iteration over linked mbufs, which led to the double free.
Fixes: 89b081e154c5 ("net/ena: fix fast mbuf free")
Signed-off-by: Shai Brandes <shaibran@amazon.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/ena/ena_ethdev.c | 39 +++++++++++-------------------------
1 file changed, 12 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 1e138849cc..e122a55fa4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -37,10 +37,10 @@
#define ENA_MIN_RING_DESC 128
/*
- * We should try to keep ENA_CLEANUP_BUF_SIZE lower than
+ * We should try to keep ENA_CLEANUP_BUF_THRESH lower than
* RTE_MEMPOOL_CACHE_MAX_SIZE, so we can fit this in mempool local cache.
*/
-#define ENA_CLEANUP_BUF_SIZE 256
+#define ENA_CLEANUP_BUF_THRESH 256
#define ENA_PTYPE_HAS_HASH (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP)
@@ -3105,32 +3105,12 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
return 0;
}
-static __rte_always_inline size_t
-ena_tx_cleanup_mbuf_fast(struct rte_mbuf **mbufs_to_clean,
- struct rte_mbuf *mbuf,
- size_t mbuf_cnt,
- size_t buf_size)
-{
- struct rte_mbuf *m_next;
-
- while (mbuf != NULL) {
- m_next = mbuf->next;
- mbufs_to_clean[mbuf_cnt++] = mbuf;
- if (mbuf_cnt == buf_size) {
- rte_pktmbuf_free_bulk(mbufs_to_clean, mbuf_cnt);
- mbuf_cnt = 0;
- }
- mbuf = m_next;
- }
-
- return mbuf_cnt;
-}
-
static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
{
- struct rte_mbuf *mbufs_to_clean[ENA_CLEANUP_BUF_SIZE];
+ struct rte_mbuf *pkts_to_clean[ENA_CLEANUP_BUF_THRESH];
struct ena_ring *tx_ring = (struct ena_ring *)txp;
size_t mbuf_cnt = 0;
+ size_t pkt_cnt = 0;
unsigned int total_tx_descs = 0;
unsigned int total_tx_pkts = 0;
uint16_t cleanup_budget;
@@ -3161,8 +3141,13 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
mbuf = tx_info->mbuf;
if (fast_free) {
- mbuf_cnt = ena_tx_cleanup_mbuf_fast(mbufs_to_clean, mbuf, mbuf_cnt,
- ENA_CLEANUP_BUF_SIZE);
+ pkts_to_clean[pkt_cnt++] = mbuf;
+ mbuf_cnt += mbuf->nb_segs;
+ if (mbuf_cnt >= ENA_CLEANUP_BUF_THRESH) {
+ rte_pktmbuf_free_bulk(pkts_to_clean, pkt_cnt);
+ mbuf_cnt = 0;
+ pkt_cnt = 0;
+ }
} else {
rte_pktmbuf_free(mbuf);
}
@@ -3185,7 +3170,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
}
if (mbuf_cnt != 0)
- rte_pktmbuf_free_bulk(mbufs_to_clean, mbuf_cnt);
+ rte_pktmbuf_free_bulk(pkts_to_clean, pkt_cnt);
/* Notify completion handler that full cleanup was performed */
if (free_pkt_cnt == 0 || total_tx_pkts < cleanup_budget)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.518477733 +0800
+++ 0114-net-ena-fix-mbuf-double-free-in-fast-free-mode.patch 2024-04-13 20:43:05.137753749 +0800
@@ -1 +1 @@
-From 237407f0c3dff2d0a6bbeb5f35c8b32fe5c8afed Mon Sep 17 00:00:00 2001
+From d2d309e5cf0ea384c13c6357a4a0f2b08816214e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 237407f0c3dff2d0a6bbeb5f35c8b32fe5c8afed ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 7b697c150a..66fc287faf 100644
+index 1e138849cc..e122a55fa4 100644
@@ -28,2 +30,2 @@
-@@ -48,10 +48,10 @@
- #define MAX_WIDE_LLQ_DEPTH_UNSUPPORTED 0
+@@ -37,10 +37,10 @@
+ #define ENA_MIN_RING_DESC 128
@@ -41 +43 @@
-@@ -3180,32 +3180,12 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
+@@ -3105,32 +3105,12 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
@@ -76 +78 @@
-@@ -3236,8 +3216,13 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
+@@ -3161,8 +3141,13 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
@@ -92 +94 @@
-@@ -3260,7 +3245,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
+@@ -3185,7 +3170,7 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
* patch 'net/vmxnet3: ignore Rx queue interrupt setup on FreeBSD' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (112 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/ena: fix mbuf double free in fast free mode' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/igc: fix timesync disable' " Xueming Li
` (9 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Tom Jones; +Cc: Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=07fde8240d6ad3bbf6a42b515d12d8a1bec650ca
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 07fde8240d6ad3bbf6a42b515d12d8a1bec650ca Mon Sep 17 00:00:00 2001
From: Tom Jones <thj@freebsd.org>
Date: Thu, 21 Mar 2024 10:31:33 +0000
Subject: [PATCH] net/vmxnet3: ignore Rx queue interrupt setup on FreeBSD
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 19fede459e0d22f5ac891366465dce07e68196bc ]
Interrupts are disabled on FreeBSD for the vmxnet3 driver as they
are not supported. Rx queue interrupts were missed by that change;
don't attempt to enable them on FreeBSD.
Without this change, applications enabling interrupts encounter an
immediate abort on FreeBSD.
Fixes: 40d5676ff1ea ("net/vmxnet3: fix initialization on FreeBSD")
Signed-off-by: Tom Jones <thj@freebsd.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 2 ++
2 files changed, 3 insertions(+)
diff --git a/.mailmap b/.mailmap
index 57e72894c0..69ef5145a9 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1446,6 +1446,7 @@ Tomasz Kulasek <tomaszx.kulasek@intel.com>
Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tom Barbette <barbette@kth.se> <tom.barbette@ulg.ac.be>
Tom Crugnale <tcrugnale@sandvine.com>
+Tom Jones <thj@freebsd.org>
Tom Millington <tmillington@solarflare.com>
Tom Rix <trix@redhat.com>
Tomer Shmilovich <tshmilovich@nvidia.com>
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 7032f0e324..70ae9c6035 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1932,11 +1932,13 @@ done:
static int
vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
+#ifndef RTE_EXEC_ENV_FREEBSD
struct vmxnet3_hw *hw = dev->data->dev_private;
vmxnet3_enable_intr(hw,
rte_intr_vec_list_index_get(dev->intr_handle,
queue_id));
+#endif
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.557236082 +0800
+++ 0115-net-vmxnet3-ignore-Rx-queue-interrupt-setup-on-FreeB.patch 2024-04-13 20:43:05.147753736 +0800
@@ -1 +1 @@
-From 19fede459e0d22f5ac891366465dce07e68196bc Mon Sep 17 00:00:00 2001
+From 07fde8240d6ad3bbf6a42b515d12d8a1bec650ca Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 19fede459e0d22f5ac891366465dce07e68196bc ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index a0b19832b5..491af1f6ff 100644
+index 57e72894c0..69ef5145a9 100644
@@ -27 +29 @@
-@@ -1458,6 +1458,7 @@ Tomasz Kulasek <tomaszx.kulasek@intel.com>
+@@ -1446,6 +1446,7 @@ Tomasz Kulasek <tomaszx.kulasek@intel.com>
@@ -36 +38 @@
-index 2707b25148..ce7c347254 100644
+index 7032f0e324..70ae9c6035 100644
@@ -39 +41 @@
-@@ -1936,11 +1936,13 @@ done:
+@@ -1932,11 +1932,13 @@ done:
* patch 'net/igc: fix timesync disable' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (113 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/vmxnet3: ignore Rx queue interrupt setup on FreeBSD' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5/hws: fix memory access in L3 decapsulation' " Xueming Li
` (8 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Wenwu Ma; +Cc: Tingting Liao, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0291a1f49edd9b9cecdc41f2989200a7e061d9c9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0291a1f49edd9b9cecdc41f2989200a7e061d9c9 Mon Sep 17 00:00:00 2001
From: Wenwu Ma <wenwux.ma@intel.com>
Date: Fri, 15 Mar 2024 09:06:31 +0800
Subject: [PATCH] net/igc: fix timesync disable
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6e4b297c22baad9c347fbf00dc7589a0a077dd00 ]
When disabling timesync, we should clear the IGC_RXPBS_CFG_TS_EN bit
of IGC_RXPBS; this patch fixes that.
Fixes: 4f6fbbf6f17d ("net/igc: support IEEE 1588 PTP")
Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Tested-by: Tingting Liao <tingtingx.liao@intel.com>
---
.mailmap | 1 +
drivers/net/igc/igc_ethdev.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 69ef5145a9..24c4ad3b85 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1434,6 +1434,7 @@ Timothy Redaelli <tredaelli@redhat.com>
Tim Shearer <tim.shearer@overturenetworks.com>
Ting-Kai Ku <ting-kai.ku@intel.com>
Ting Xu <ting.xu@intel.com>
+Tingting Liao <tingtingx.liao@intel.com>
Tiwei Bie <tiwei.bie@intel.com> <btw@mail.ustc.edu.cn>
Todd Fujinaka <todd.fujinaka@intel.com>
Tomasz Cel <tomaszx.cel@intel.com>
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 58c4f80927..690736b6d1 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -2853,7 +2853,7 @@ eth_igc_timesync_disable(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_TSYNCRXCTL, 0);
val = IGC_READ_REG(hw, IGC_RXPBS);
- val &= IGC_RXPBS_CFG_TS_EN;
+ val &= ~IGC_RXPBS_CFG_TS_EN;
IGC_WRITE_REG(hw, IGC_RXPBS, val);
val = IGC_READ_REG(hw, IGC_SRRCTL(0));
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.586245344 +0800
+++ 0116-net-igc-fix-timesync-disable.patch 2024-04-13 20:43:05.147753736 +0800
@@ -1 +1 @@
-From 6e4b297c22baad9c347fbf00dc7589a0a077dd00 Mon Sep 17 00:00:00 2001
+From 0291a1f49edd9b9cecdc41f2989200a7e061d9c9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6e4b297c22baad9c347fbf00dc7589a0a077dd00 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 491af1f6ff..f53fb59513 100644
+index 69ef5145a9..24c4ad3b85 100644
@@ -23 +25 @@
-@@ -1446,6 +1446,7 @@ Timothy Redaelli <tredaelli@redhat.com>
+@@ -1434,6 +1434,7 @@ Timothy Redaelli <tredaelli@redhat.com>
@@ -32 +34 @@
-index 08e9e16ae5..87d7f7caa0 100644
+index 58c4f80927..690736b6d1 100644
@@ -35 +37 @@
-@@ -2855,7 +2855,7 @@ eth_igc_timesync_disable(struct rte_eth_dev *dev)
+@@ -2853,7 +2853,7 @@ eth_igc_timesync_disable(struct rte_eth_dev *dev)
* patch 'net/mlx5/hws: fix memory access in L3 decapsulation' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (114 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/igc: fix timesync disable' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'net/mlx5: fix sync flow meter action' " Xueming Li
` (7 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Alex Vesker; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=aeebcd33c070db072666eb2dbdf587f3c0319a49
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From aeebcd33c070db072666eb2dbdf587f3c0319a49 Mon Sep 17 00:00:00 2001
From: Alex Vesker <valex@nvidia.com>
Date: Fri, 22 Mar 2024 12:01:59 +0200
Subject: [PATCH] net/mlx5/hws: fix memory access in L3 decapsulation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1738b65b89ba5c80c452cb9acdaad4af892e802d ]
When a decapL3 action is created, we would access header
data even when the SHARED flag is not set, which would
lead to an invalid memory access.
Fixes: 3a6c50215c07 ("net/mlx5/hws: support multi-pattern")
Signed-off-by: Alex Vesker <valex@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_action.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..d21bca294c 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -1465,7 +1465,9 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
/* Create a full modify header action list in case shared */
mlx5dr_action_prepare_decap_l3_actions(hdrs->sz, mh_data, &num_of_actions);
- mlx5dr_action_prepare_decap_l3_data(hdrs->data, mh_data, num_of_actions);
+
+ if (action->flags & MLX5DR_ACTION_FLAG_SHARED)
+ mlx5dr_action_prepare_decap_l3_data(hdrs->data, mh_data, num_of_actions);
/* All DecapL3 cases require the same max arg size */
arg_obj = mlx5dr_arg_create_modify_header_arg(ctx,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.617256604 +0800
+++ 0117-net-mlx5-hws-fix-memory-access-in-L3-decapsulation.patch 2024-04-13 20:43:05.147753736 +0800
@@ -1 +1 @@
-From 1738b65b89ba5c80c452cb9acdaad4af892e802d Mon Sep 17 00:00:00 2001
+From aeebcd33c070db072666eb2dbdf587f3c0319a49 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1738b65b89ba5c80c452cb9acdaad4af892e802d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 084d4d606e..562fb5cbb4 100644
+index 862ee3e332..d21bca294c 100644
@@ -22 +24 @@
-@@ -1775,7 +1775,9 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
+@@ -1465,7 +1465,9 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
* patch 'net/mlx5: fix sync flow meter action' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (115 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5/hws: fix memory access in L3 decapsulation' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:49 ` patch 'doc: fix typo in profiling guide' " Xueming Li
` (6 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=df1119d4a906b1644c231059579315d84ee9772c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From df1119d4a906b1644c231059579315d84ee9772c Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Tue, 19 Mar 2024 13:24:55 +0200
Subject: [PATCH] net/mlx5: fix sync flow meter action
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit dc7faa135188ede2455c66e79110538f1f92e08c ]
The PMD implements the sync METER flow action as async.
The queue selected for sync operations is `MLX5_HW_INV_QUEUE`.
That dummy queue value is translated into `CTRL_QUEUE_ID(priv)`.
Async job allocation converts INV queue into the real value, but
job release does not.
This patch fixes the queue value provided to `flow_hw_job_put()`.
This patch also removes dead code found in the METER_MARK
destroy handler.
Coverity issue: 415804, 415806
Fixes: 4359d9d1f76b ("net/mlx5: fix sync meter processing in HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 5 +----
drivers/net/mlx5/mlx5_flow_meter.c | 2 +-
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a54075ed7e..47fbbd0818 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10497,10 +10497,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
NULL, "Unable to wait for ASO meter CQE");
break;
}
- if (!job)
- mlx5_ipool_free(pool->idx_pool, idx);
- else
- aso = true;
+ aso = true;
break;
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_destroy(dev, handle, error);
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 249dc73691..7bf5018c70 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -1981,7 +1981,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE, aso_mtr,
&priv->mtr_bulk, job, true);
if (ret) {
- flow_hw_job_put(priv, job, MLX5_HW_INV_QUEUE);
+ flow_hw_job_put(priv, job, CTRL_QUEUE_ID(priv));
return -rte_mtr_error_set(error, ENOTSUP,
RTE_MTR_ERROR_TYPE_UNSPECIFIED,
NULL, "Failed to create devx meter.");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.648623763 +0800
+++ 0118-net-mlx5-fix-sync-flow-meter-action.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From dc7faa135188ede2455c66e79110538f1f92e08c Mon Sep 17 00:00:00 2001
+From df1119d4a906b1644c231059579315d84ee9772c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit dc7faa135188ede2455c66e79110538f1f92e08c ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 35f1ed7a03..9ebbe664d1 100644
+index a54075ed7e..47fbbd0818 100644
@@ -32 +34 @@
-@@ -11494,10 +11494,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
+@@ -10497,10 +10497,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
@@ -45 +47 @@
-index 4045c4c249..ca361f7efa 100644
+index 249dc73691..7bf5018c70 100644
@@ -48 +50 @@
-@@ -2265,7 +2265,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1981,7 +1981,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
* patch 'doc: fix typo in profiling guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (116 preceding siblings ...)
2024-04-13 12:49 ` patch 'net/mlx5: fix sync flow meter action' " Xueming Li
@ 2024-04-13 12:49 ` Xueming Li
2024-04-13 12:50 ` patch 'doc: fix typo in packet framework " Xueming Li
` (5 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:49 UTC (permalink / raw)
To: Emi Aoki; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e85092f875045a9212de39a05876925e166cf3ef
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e85092f875045a9212de39a05876925e166cf3ef Mon Sep 17 00:00:00 2001
From: Emi Aoki <embm29@gmail.com>
Date: Thu, 21 Mar 2024 19:02:25 -0400
Subject: [PATCH] doc: fix typo in profiling guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a67ce452919a3381ccdc53af7bcc0196838b3586 ]
Caught by codespell.
Fixes: 9d5ba88c2d41 ("doc: add ARM profiling methods")
Signed-off-by: Emi Aoki <embm29@gmail.com>
---
.mailmap | 1 +
doc/guides/prog_guide/profile_app.rst | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 24c4ad3b85..32a69b6bd0 100644
--- a/.mailmap
+++ b/.mailmap
@@ -369,6 +369,7 @@ Elad Persiko <eladpe@mellanox.com>
Elena Agostini <eagostini@nvidia.com>
Eli Britstein <elibr@nvidia.com> <elibr@mellanox.com>
Elza Mathew <elza.mathew@intel.com>
+Emi Aoki <embm29@gmail.com>
Emma Finn <emma.finn@intel.com>
Emma Kenny <emma.kenny@intel.com>
Emmanuel Roullit <emmanuel.roullit@gmail.com>
diff --git a/doc/guides/prog_guide/profile_app.rst b/doc/guides/prog_guide/profile_app.rst
index 14292d4c25..a6b5fb4d5e 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -59,7 +59,7 @@ addition to the standard events, ``perf`` can be used to profile arm64
specific PMU (Performance Monitor Unit) events through raw events (``-e``
``-rXX``).
-For more derails refer to the
+For more details refer to the
`ARM64 specific PMU events enumeration <http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100095_0002_04_en/way1382543438508.html>`_.
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.677867325 +0800
+++ 0119-doc-fix-typo-in-profiling-guide.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From a67ce452919a3381ccdc53af7bcc0196838b3586 Mon Sep 17 00:00:00 2001
+From e85092f875045a9212de39a05876925e166cf3ef Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a67ce452919a3381ccdc53af7bcc0196838b3586 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index f53fb59513..67687d21b1 100644
+index 24c4ad3b85..32a69b6bd0 100644
@@ -21 +23 @@
-@@ -373,6 +373,7 @@ Elad Persiko <eladpe@mellanox.com>
+@@ -369,6 +369,7 @@ Elad Persiko <eladpe@mellanox.com>
* patch 'doc: fix typo in packet framework guide' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (117 preceding siblings ...)
2024-04-13 12:49 ` patch 'doc: fix typo in profiling guide' " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
2024-04-13 12:50 ` patch 'test/power: fix typo in error message' " Xueming Li
` (4 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Flore Norceide; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ed3a625fe6ce5f6890578c9fa9385f624a459d0f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ed3a625fe6ce5f6890578c9fa9385f624a459d0f Mon Sep 17 00:00:00 2001
From: Flore Norceide <florestecien@gmail.com>
Date: Thu, 21 Mar 2024 19:03:52 -0400
Subject: [PATCH] doc: fix typo in packet framework guide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e4e1f2f7d24240d476cfa757e218d1532759b65d ]
Caught by codespell.
Fixes: fc1f2750a3ec ("doc: programmers guide")
Signed-off-by: Flore Norceide <florestecien@gmail.com>
---
.mailmap | 1 +
doc/guides/prog_guide/packet_framework.rst | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 32a69b6bd0..1798486ef9 100644
--- a/.mailmap
+++ b/.mailmap
@@ -407,6 +407,7 @@ Fidaullah Noonari <fidaullah.noonari@emumba.com>
Fiona Trahe <fiona.trahe@intel.com>
Flavia Musatescu <flavia.musatescu@intel.com>
Flavio Leitner <fbl@redhat.com> <fbl@sysclose.org>
+Flore Norceide <florestecien@gmail.com>
Forrest Shi <xuelin.shi@nxp.com>
Francesco Mancino <francesco.mancino@tutus.se>
Francesco Santoro <francesco.santoro@6wind.com>
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index ebc69d8c3e..9987ead6c5 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -509,7 +509,7 @@ the number of L2 or L3 cache memory misses is greatly reduced, hence one of the
This is because the cost of L2/L3 cache memory miss on memory read accesses is high, as usually due to data dependency between instructions,
the CPU execution units have to stall until the read operation is completed from L3 cache memory or external DRAM memory.
By using prefetch instructions, the latency of memory read accesses is hidden,
-provided that it is preformed early enough before the respective data structure is actually used.
+provided that it is performed early enough before the respective data structure is actually used.
By splitting the processing into several stages that are executed on different packets (the packets from the input burst are interlaced),
enough work is created to allow the prefetch instructions to complete successfully (before the prefetched data structures are actually accessed) and
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-04-13 20:43:08.700743595 +0800
+++ 0120-doc-fix-typo-in-packet-framework-guide.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From e4e1f2f7d24240d476cfa757e218d1532759b65d Mon Sep 17 00:00:00 2001
+From ed3a625fe6ce5f6890578c9fa9385f624a459d0f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e4e1f2f7d24240d476cfa757e218d1532759b65d ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 67687d21b1..ac50264e66 100644
+index 32a69b6bd0..1798486ef9 100644
@@ -21 +23 @@
-@@ -411,6 +411,7 @@ Fidaullah Noonari <fidaullah.noonari@emumba.com>
+@@ -407,6 +407,7 @@ Fidaullah Noonari <fidaullah.noonari@emumba.com>
* patch 'test/power: fix typo in error message' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (118 preceding siblings ...)
2024-04-13 12:50 ` patch 'doc: fix typo in packet framework " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
2024-04-13 12:50 ` patch 'test/cfgfile: fix typo in error messages' " Xueming Li
` (3 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Fidel Castro; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f48e923b46ad51ce27c3da06827c297b04de0439
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f48e923b46ad51ce27c3da06827c297b04de0439 Mon Sep 17 00:00:00 2001
From: Fidel Castro <fidelcastro.s@hotmail.com>
Date: Thu, 21 Mar 2024 18:35:52 -0400
Subject: [PATCH] test/power: fix typo in error message
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4e335241a44cf8e3a34c0549293ca73d272b6c05 ]
Caught by codespell.
Fixes: 2653bee888b4 ("test/power: check all environment types")
Signed-off-by: Fidel Castro <fidelcastro.s@hotmail.com>
---
.mailmap | 1 +
app/test/test_power.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 1798486ef9..9654531a59 100644
--- a/.mailmap
+++ b/.mailmap
@@ -404,6 +404,7 @@ Fengtian Guo <fengtian.guo@6wind.com>
Ferdinand Thiessen <rpm@fthiessen.de>
Ferruh Yigit <ferruh.yigit@amd.com> <ferruh.yigit@intel.com> <ferruh.yigit@xilinx.com> <ferruhy@gmail.com>
Fidaullah Noonari <fidaullah.noonari@emumba.com>
+Fidel Castro <fidelcastro.s@hotmail.com>
Fiona Trahe <fiona.trahe@intel.com>
Flavia Musatescu <flavia.musatescu@intel.com>
Flavio Leitner <fbl@redhat.com> <fbl@sysclose.org>
diff --git a/app/test/test_power.c b/app/test/test_power.c
index f1e80299d3..403adc22d6 100644
--- a/app/test/test_power.c
+++ b/app/test/test_power.c
@@ -143,7 +143,7 @@ test_power(void)
/* Test setting a valid environment */
ret = rte_power_set_env(envs[i]);
if (ret != 0) {
- printf("Unexpectedly unsucceeded on setting a valid environment\n");
+ printf("Unexpectedly unsuccessful on setting a valid environment\n");
return -1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.727263560 +0800
+++ 0121-test-power-fix-typo-in-error-message.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From 4e335241a44cf8e3a34c0549293ca73d272b6c05 Mon Sep 17 00:00:00 2001
+From f48e923b46ad51ce27c3da06827c297b04de0439 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4e335241a44cf8e3a34c0549293ca73d272b6c05 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index ac50264e66..6bb0855236 100644
+index 1798486ef9..9654531a59 100644
@@ -21 +23 @@
-@@ -408,6 +408,7 @@ Fengtian Guo <fengtian.guo@6wind.com>
+@@ -404,6 +404,7 @@ Fengtian Guo <fengtian.guo@6wind.com>
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'test/cfgfile: fix typo in error messages' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (119 preceding siblings ...)
2024-04-13 12:50 ` patch 'test/power: fix typo in error message' " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
2024-04-13 12:50 ` patch 'examples/ipsec-secgw: fix typo in error message' " Xueming Li
` (2 subsequent siblings)
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Holly Nichols; +Cc: Vinh Tran, Thomas Monjalon, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cf631810bf09ef47221ac6f3981497f20e2621fc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cf631810bf09ef47221ac6f3981497f20e2621fc Mon Sep 17 00:00:00 2001
From: Holly Nichols <hollynichols04@gmail.com>
Date: Thu, 21 Mar 2024 19:05:29 -0400
Subject: [PATCH] test/cfgfile: fix typo in error messages
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fbca1ad85f7725b876d032cb732b3332d247e1f4 ]
Caught by codespell.
Fixes: c54e7234bc9e ("test/cfgfile: add basic unit tests")
Signed-off-by: Holly Nichols <hollynichols04@gmail.com>
Signed-off-by: Vinh Tran <vinh.t.tran10@gmail.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
.mailmap | 2 ++
app/test/test_cfgfile.c | 8 ++++----
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/.mailmap b/.mailmap
index 9654531a59..6e34fe6566 100644
--- a/.mailmap
+++ b/.mailmap
@@ -523,6 +523,7 @@ Hiral Shah <hshah@marvell.com>
Hiroki Shirokura <slank.dev@gmail.com>
Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Hiroyuki Mikita <h.mikita89@gmail.com>
+Holly Nichols <hollynichols04@gmail.com>
Hongbo Zheng <zhenghongbo3@huawei.com>
Hongjun Ni <hongjun.ni@intel.com>
Hongzhi Guo <guohongzhi1@huawei.com>
@@ -1499,6 +1500,7 @@ Vincent Guo <guopengfei160@163.com>
Vincent Jardin <vincent.jardin@6wind.com>
Vincent Li <vincent.mc.li@gmail.com>
Vincent S. Cojot <vcojot@redhat.com>
+Vinh Tran <vinh.t.tran10@gmail.com>
Vipin Varghese <vipin.varghese@amd.com> <vipin.varghese@intel.com>
Vipul Ashri <vipul.ashri@oracle.com>
Visa Hankala <visa@hankala.org>
diff --git a/app/test/test_cfgfile.c b/app/test/test_cfgfile.c
index 2f596affee..a5e3d8699c 100644
--- a/app/test/test_cfgfile.c
+++ b/app/test/test_cfgfile.c
@@ -168,7 +168,7 @@ test_cfgfile_invalid_section_header(void)
struct rte_cfgfile *cfgfile;
cfgfile = rte_cfgfile_load(CFG_FILES_ETC "/invalid_section.ini", 0);
- TEST_ASSERT_NULL(cfgfile, "Expected failured did not occur");
+ TEST_ASSERT_NULL(cfgfile, "Expected failure did not occur");
return 0;
}
@@ -185,7 +185,7 @@ test_cfgfile_invalid_comment(void)
cfgfile = rte_cfgfile_load_with_params(CFG_FILES_ETC "/sample2.ini", 0,
&params);
- TEST_ASSERT_NULL(cfgfile, "Expected failured did not occur");
+ TEST_ASSERT_NULL(cfgfile, "Expected failure did not occur");
return 0;
}
@@ -196,7 +196,7 @@ test_cfgfile_invalid_key_value_pair(void)
struct rte_cfgfile *cfgfile;
cfgfile = rte_cfgfile_load(CFG_FILES_ETC "/empty_key_value.ini", 0);
- TEST_ASSERT_NULL(cfgfile, "Expected failured did not occur");
+ TEST_ASSERT_NULL(cfgfile, "Expected failure did not occur");
return 0;
}
@@ -236,7 +236,7 @@ test_cfgfile_missing_section(void)
struct rte_cfgfile *cfgfile;
cfgfile = rte_cfgfile_load(CFG_FILES_ETC "/missing_section.ini", 0);
- TEST_ASSERT_NULL(cfgfile, "Expected failured did not occur");
+ TEST_ASSERT_NULL(cfgfile, "Expected failure did not occur");
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.752317327 +0800
+++ 0122-test-cfgfile-fix-typo-in-error-messages.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From fbca1ad85f7725b876d032cb732b3332d247e1f4 Mon Sep 17 00:00:00 2001
+From cf631810bf09ef47221ac6f3981497f20e2621fc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fbca1ad85f7725b876d032cb732b3332d247e1f4 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 6bb0855236..f3a25a7402 100644
+index 9654531a59..6e34fe6566 100644
@@ -23 +25 @@
-@@ -528,6 +528,7 @@ Hiral Shah <hshah@marvell.com>
+@@ -523,6 +523,7 @@ Hiral Shah <hshah@marvell.com>
@@ -31 +33 @@
-@@ -1513,6 +1514,7 @@ Vincent Guo <guopengfei160@163.com>
+@@ -1499,6 +1500,7 @@ Vincent Guo <guopengfei160@163.com>
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'examples/ipsec-secgw: fix typo in error message' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (120 preceding siblings ...)
2024-04-13 12:50 ` patch 'test/cfgfile: fix typo in error messages' " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
2024-04-13 12:50 ` patch 'dts: strip whitespaces from stdout and stderr' " Xueming Li
2024-04-13 12:50 ` patch 'net/ena/base: fix metrics excessive memory consumption' " Xueming Li
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Masoumeh Farhadi Nia; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=abc68161348cb4242069279f9d068044d9219442
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From abc68161348cb4242069279f9d068044d9219442 Mon Sep 17 00:00:00 2001
From: Masoumeh Farhadi Nia <masoumeh.farhadinia@gmail.com>
Date: Thu, 21 Mar 2024 18:41:35 -0400
Subject: [PATCH] examples/ipsec-secgw: fix typo in error message
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0dc91038201be67714f92a0fd0e0e7ed4f25c6a0 ]
Caught by codespell.
Fixes: 7622291b641d ("examples/ipsec-secgw: allow to specify neighbour MAC address")
Signed-off-by: Masoumeh Farhadi Nia <masoumeh.farhadinia@gmail.com>
---
.mailmap | 1 +
examples/ipsec-secgw/parser.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 6e34fe6566..f3b429b747 100644
--- a/.mailmap
+++ b/.mailmap
@@ -901,6 +901,7 @@ Martin Weiser <martin.weiser@allegro-packets.com>
Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@intel.com> <martyna.szapar@intel.com>
Maryam Tahhan <maryam.tahhan@intel.com>
Masoud Hasanifard <masoudhasanifard@gmail.com>
+Masoumeh Farhadi Nia <masoumeh.farhadinia@gmail.com>
Matan Azrad <matan@nvidia.com> <matan@mellanox.com>
Matej Vido <matejvido@gmail.com> <vido@cesnet.cz>
Mateusz Kowalski <mateusz.kowalski@intel.com>
diff --git a/examples/ipsec-secgw/parser.c b/examples/ipsec-secgw/parser.c
index 98f8176651..2bd6df335b 100644
--- a/examples/ipsec-secgw/parser.c
+++ b/examples/ipsec-secgw/parser.c
@@ -388,7 +388,7 @@ cfg_parse_neigh(void *parsed_result, __rte_unused struct cmdline *cl,
rc = parse_mac(res->mac, &mac);
APP_CHECK(rc == 0, st, "invalid ether addr:%s", res->mac);
rc = add_dst_ethaddr(res->port, &mac);
- APP_CHECK(rc == 0, st, "invalid port numer:%hu", res->port);
+ APP_CHECK(rc == 0, st, "invalid port number:%hu", res->port);
if (st->status < 0)
return;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.783850286 +0800
+++ 0123-examples-ipsec-secgw-fix-typo-in-error-message.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From 0dc91038201be67714f92a0fd0e0e7ed4f25c6a0 Mon Sep 17 00:00:00 2001
+From abc68161348cb4242069279f9d068044d9219442 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0dc91038201be67714f92a0fd0e0e7ed4f25c6a0 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index f3a25a7402..3843868716 100644
+index 6e34fe6566..f3b429b747 100644
@@ -21 +23 @@
-@@ -908,6 +908,7 @@ Martin Weiser <martin.weiser@allegro-packets.com>
+@@ -901,6 +901,7 @@ Martin Weiser <martin.weiser@allegro-packets.com>
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'dts: strip whitespaces from stdout and stderr' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (121 preceding siblings ...)
2024-04-13 12:50 ` patch 'examples/ipsec-secgw: fix typo in error message' " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
2024-04-13 12:50 ` patch 'net/ena/base: fix metrics excessive memory consumption' " Xueming Li
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Juraj Linkeš; +Cc: Jeremy Spewock, Patrick Robb, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a20a3c1129aff09b2058b37d3f463e4879e46b74
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a20a3c1129aff09b2058b37d3f463e4879e46b74 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Juraj=20Linke=C5=A1?= <juraj.linkes@pantheon.tech>
Date: Tue, 13 Feb 2024 12:14:39 +0100
Subject: [PATCH] dts: strip whitespaces from stdout and stderr
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
There could be a newline at the end of stdout or stderr of a remotely
executed command. These cause issues when used later, such as when
joining paths from such commands - a newline in the middle of a path is
not valid.
Fixes: ad80f550dbc5 ("dts: add SSH command verification")
Cc: stable@dpdk.org
Signed-off-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Jeremy Spewock <jspewock@iol.unh.edu>
Acked-by: Patrick Robb <probb@iol.unh.edu>
---
.../remote_session/remote/remote_session.py | 24 +++++++++++++++----
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/dts/framework/remote_session/remote/remote_session.py b/dts/framework/remote_session/remote/remote_session.py
index 719f7d1ef7..68894a9686 100644
--- a/dts/framework/remote_session/remote/remote_session.py
+++ b/dts/framework/remote_session/remote/remote_session.py
@@ -3,8 +3,8 @@
# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
# Copyright(c) 2022-2023 University of New Hampshire
-import dataclasses
from abc import ABC, abstractmethod
+from dataclasses import InitVar, dataclass, field
from pathlib import PurePath
from framework.config import NodeConfiguration
@@ -13,7 +13,7 @@ from framework.logger import DTSLOG
from framework.settings import SETTINGS
-@dataclasses.dataclass(slots=True, frozen=True)
+@dataclass(slots=True, frozen=True)
class CommandResult:
"""
The result of remote execution of a command.
@@ -21,9 +21,25 @@ class CommandResult:
name: str
command: str
- stdout: str
- stderr: str
+ init_stdout: InitVar[str]
+ init_stderr: InitVar[str]
return_code: int
+ stdout: str = field(init=False)
+ stderr: str = field(init=False)
+
+ def __post_init__(self, init_stdout: str, init_stderr: str) -> None:
+ """Strip the whitespaces from stdout and stderr.
+
+ The generated __init__ method uses object.__setattr__() when the dataclass is frozen,
+ so that's what we use here as well.
+
+ In order to get access to dataclass fields in the __post_init__ method,
+ we have to type them as InitVars. These InitVars are included in the __init__ method's
+ signature, so we have to exclude the actual stdout and stderr fields
+ from the __init__ method's signature, so that we have the proper number of arguments.
+ """
+ object.__setattr__(self, "stdout", init_stdout.strip())
+ object.__setattr__(self, "stderr", init_stderr.strip())
def __str__(self) -> str:
return (
--
2.34.1
^ permalink raw reply [flat|nested] 263+ messages in thread
* patch 'net/ena/base: fix metrics excessive memory consumption' has been queued to stable release 23.11.1
2024-04-13 12:48 ` patch " Xueming Li
` (122 preceding siblings ...)
2024-04-13 12:50 ` patch 'dts: strip whitespaces from stdout and stderr' " Xueming Li
@ 2024-04-13 12:50 ` Xueming Li
123 siblings, 0 replies; 263+ messages in thread
From: Xueming Li @ 2024-04-13 12:50 UTC (permalink / raw)
To: Shai Brandes; +Cc: Amit Bernstein, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.1
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 04/15/24. So please
shout if anyone has objections.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cfa8a4cb909e07cbb34941358f4d912c879dca34
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cfa8a4cb909e07cbb34941358f4d912c879dca34 Mon Sep 17 00:00:00 2001
From: Shai Brandes <shaibran@amazon.com>
Date: Mon, 8 Apr 2024 15:15:25 +0300
Subject: [PATCH] net/ena/base: fix metrics excessive memory consumption
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c8a1898f82f8c04cbe1d3e2d0eec0705386c23f7 ]
The driver accidentally allocates a huge memory
buffer for the customer metrics because it uses
an uninitialized variable for the buffer length.
This can lead to excessive memory footprint for
the driver which can even fail to initialize in
case of insufficient memory.
Bugzilla ID: 1400
Fixes: f73f53f7dc7a ("net/ena: upgrade HAL")
Cc: stable@dpdk.org
Signed-off-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
---
drivers/net/ena/base/ena_com.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 57ccde9545..2f438597e6 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -3141,16 +3141,18 @@ int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev,
int ena_com_allocate_customer_metrics_buffer(struct ena_com_dev *ena_dev)
{
struct ena_customer_metrics *customer_metrics = &ena_dev->customer_metrics;
+ customer_metrics->buffer_len = ENA_CUSTOMER_METRICS_BUFFER_SIZE;
+ customer_metrics->buffer_virt_addr = NULL;
ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev,
customer_metrics->buffer_len,
customer_metrics->buffer_virt_addr,
customer_metrics->buffer_dma_addr,
customer_metrics->buffer_dma_handle);
- if (unlikely(customer_metrics->buffer_virt_addr == NULL))
+ if (unlikely(customer_metrics->buffer_virt_addr == NULL)) {
+ customer_metrics->buffer_len = 0;
return ENA_COM_NO_MEM;
-
- customer_metrics->buffer_len = ENA_CUSTOMER_METRICS_BUFFER_SIZE;
+ }
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-04-13 20:43:08.830253026 +0800
+++ 0125-net-ena-base-fix-metrics-excessive-memory-consumptio.patch 2024-04-13 20:43:05.157753723 +0800
@@ -1 +1 @@
-From c8a1898f82f8c04cbe1d3e2d0eec0705386c23f7 Mon Sep 17 00:00:00 2001
+From cfa8a4cb909e07cbb34941358f4d912c879dca34 Mon Sep 17 00:00:00 2001
@@ -3,2 +3,3 @@
-Date: Tue, 12 Mar 2024 20:07:09 +0200
-Subject: [PATCH] net/ena: improve style and readability
+Date: Mon, 8 Apr 2024 15:15:25 +0300
+Subject: [PATCH] net/ena/base: fix metrics excessive memory consumption
+Cc: Xueming Li <xuemingl@nvidia.com>
@@ -6,2 +7,12 @@
-This patch makes several changes to improve
-the style and readability of the code.
+[ upstream commit c8a1898f82f8c04cbe1d3e2d0eec0705386c23f7 ]
+
+The driver accidentally allocates a huge memory
+buffer for the customer metrics because it uses
+an uninitialized variable for the buffer length.
+This can lead to excessive memory footprint for
+the driver which can even fail to initialize in
+case of insufficient memory.
+
+Bugzilla ID: 1400
+Fixes: f73f53f7dc7a ("net/ena: upgrade HAL")
+Cc: stable@dpdk.org
@@ -12,3 +23,2 @@
- drivers/net/ena/base/ena_com.c | 13 +++++--------
- drivers/net/ena/base/ena_plat_dpdk.h | 2 +-
- 2 files changed, 6 insertions(+), 9 deletions(-)
+ drivers/net/ena/base/ena_com.c | 8 +++++---
+ 1 file changed, 5 insertions(+), 3 deletions(-)
@@ -17 +27 @@
-index b98540ba63..2db21e7895 100644
+index 57ccde9545..2f438597e6 100644
@@ -20,28 +30,2 @@
-@@ -1914,15 +1914,14 @@ int ena_com_phc_get_timestamp(struct ena_com_dev *ena_dev, u64 *timestamp)
-
- /* PHC is in active state, update statistics according to req_id and error_flags */
- if ((READ_ONCE16(read_resp->req_id) != phc->req_id) ||
-- (read_resp->error_flags & ENA_PHC_ERROR_FLAGS)) {
-+ (read_resp->error_flags & ENA_PHC_ERROR_FLAGS))
- /* Device didn't update req_id during blocking time or timestamp is invalid,
- * this indicates on a device error
- */
- phc->stats.phc_err++;
-- } else {
-+ else
- /* Device updated req_id during blocking time with valid timestamp */
- phc->stats.phc_exp++;
-- }
- }
-
- /* Setting relative timeouts */
-@@ -2431,7 +2430,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data)
- timestamp = (u64)aenq_common->timestamp_low |
- ((u64)aenq_common->timestamp_high << 32);
-
-- ena_trc_dbg(ena_dev, "AENQ! Group[%x] Syndrome[%x] timestamp: [%" ENA_PRIU64 "s]\n",
-+ ena_trc_dbg(ena_dev, "AENQ! Group[%x] Syndrome[%x] timestamp: [%" ENA_PRIu64 "s]\n",
- aenq_common->group,
- aenq_common->syndrome,
- timestamp);
-@@ -3233,16 +3232,15 @@ int ena_com_allocate_customer_metrics_buffer(struct ena_com_dev *ena_dev)
+@@ -3141,16 +3141,18 @@ int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev,
+ int ena_com_allocate_customer_metrics_buffer(struct ena_com_dev *ena_dev)
@@ -50 +33,0 @@
-
@@ -51,0 +35,2 @@
++ customer_metrics->buffer_virt_addr = NULL;
+
@@ -58 +43,2 @@
-+ if (unlikely(!customer_metrics->buffer_virt_addr))
++ if (unlikely(customer_metrics->buffer_virt_addr == NULL)) {
++ customer_metrics->buffer_len = 0;
@@ -60,2 +45,0 @@
-
-- customer_metrics->buffer_len = ENA_CUSTOMER_METRICS_BUFFER_SIZE;
@@ -63,2 +47,2 @@
- return 0;
- }
+- customer_metrics->buffer_len = ENA_CUSTOMER_METRICS_BUFFER_SIZE;
++ }
@@ -66,6 +50 @@
-@@ -3285,7 +3283,6 @@ void ena_com_delete_customer_metrics_buffer(struct ena_com_dev *ena_dev)
- customer_metrics->buffer_dma_addr,
- customer_metrics->buffer_dma_handle);
- customer_metrics->buffer_virt_addr = NULL;
-- customer_metrics->buffer_len = 0;
- }
+ return 0;
@@ -73,14 +51,0 @@
-
-diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
-index bb21e1bf01..9e365b0f3b 100644
---- a/drivers/net/ena/base/ena_plat_dpdk.h
-+++ b/drivers/net/ena/base/ena_plat_dpdk.h
-@@ -40,7 +40,7 @@ typedef uint64_t dma_addr_t;
- #define ETIME ETIMEDOUT
- #endif
-
--#define ENA_PRIU64 PRIu64
-+#define ENA_PRIu64 PRIu64
- #define ena_atomic32_t rte_atomic32_t
- #define ena_mem_handle_t const struct rte_memzone *
-
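The allocation-ordering bug fixed above (the length field was read to size the
allocation before it had ever been assigned) can be illustrated with a short,
hypothetical sketch; the class and names below are illustrative only, not the
ENA driver API:

```python
# Hypothetical sketch of the fix: assign the buffer length *before* it is
# used to size the allocation, and reset it to 0 on failure so no stale
# size survives, mirroring the corrected ena_com.c flow.
BUFFER_SIZE = 4096  # stands in for ENA_CUSTOMER_METRICS_BUFFER_SIZE

class MetricsBuffer:
    def allocate(self) -> int:
        self.buffer_len = BUFFER_SIZE        # set the length first...
        try:
            self.buffer = bytearray(self.buffer_len)
        except MemoryError:
            self.buffer_len = 0              # ...and clear it on failure
            return -1                        # stands in for ENA_COM_NO_MEM
        return 0

buf = MetricsBuffer()
print(buf.allocate(), buf.buffer_len)  # -> 0 4096
```

Before the fix, the C code read the still-uninitialized length when calling
the allocator, which is why the footprint could balloon arbitrarily.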
^ permalink raw reply [flat|nested] 263+ messages in thread