CI found that in the logic of 'nfp_aesgcm_iv_update()', the variable
'cfg_iv' may be used uninitialized in some cases.

Coverity issue: 415808
Fixes: 36361ca7fea2 ("net/nfp: fix data endianness problem")
Cc: shihong.wang@corigine.com
Cc: stable@dpdk.org

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_ipsec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c
index 205d1d594c..647bc2bb6d 100644
--- a/drivers/net/nfp/nfp_ipsec.c
+++ b/drivers/net/nfp/nfp_ipsec.c
@@ -526,7 +526,7 @@ nfp_aesgcm_iv_update(struct ipsec_add_sa *cfg,
 	char *iv_b;
 	char *iv_str;
 	const rte_be32_t *iv_value;
-	uint8_t cfg_iv[NFP_ESP_IV_LENGTH];
+	uint8_t cfg_iv[NFP_ESP_IV_LENGTH] = {};
 
 	iv_str = strdup(iv_string);
 	if (iv_str == NULL) {
-- 
2.39.1
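The one-line fix above works because an explicit initializer zeroes every array element the initializer list does not name. A minimal standalone sketch of the pattern (the buffer length and helper below are illustrative, not the driver's actual code; note that `= {}` is a GNU/C23 extension, while `= {0}` is portable C99):

```c
#include <stdint.h>
#include <string.h>

#define IV_LEN 16 /* illustrative stand-in for NFP_ESP_IV_LENGTH */

/* Copy up to 'n' bytes of IV material into 'dst' (IV_LEN bytes).
 * Because 'tmp' is zero-initialized, any trailing bytes the caller
 * does not supply are deterministically 0 instead of stack garbage,
 * which is exactly what Coverity flagged in the unpatched code. */
static void
fill_iv(uint8_t *dst, const uint8_t *src, size_t n)
{
	uint8_t tmp[IV_LEN] = {0}; /* portable spelling of the patch's '= {}' */

	if (n > IV_LEN)
		n = IV_LEN;
	memcpy(tmp, src, n);
	memcpy(dst, tmp, IV_LEN); /* bytes n..IV_LEN-1 are guaranteed zero */
}
```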
> From: Sivaprasad Tummala [mailto:sivaprasad.tummala@amd.com]
> Sent: Monday, 18 March 2024 18.32
>
> Currently the config option allows lcore IDs up to 255,
> irrespective of RTE_MAX_LCORES and needs to be fixed.
>
> The patch allows config options based on DPDK config.
>
> Fixes: af75078fece3 ("first public release")
> Cc: stable@dpdk.org
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> ---
I suggest you update the descriptions of the individual patches too, like you did for patch 0/6.
E.g. this patch not only fixes the lcore_id type size, but also the queue_id type size.
For the series,
Acked-by: Morten Brørup <mb@smartsharesystems.com>
From: Long Wu <long.wu@corigine.com>

The PF representor port's queue is different from the VF/physical
representor port's, so the release process when closing the port
should be different too.

Fixes: 39b3951 ("net/nfp: fix resource leak for exit of flower firmware")
Cc: chaoyong.he@corigine.com
Cc: stable@dpdk.org

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 .../net/nfp/flower/nfp_flower_representor.c   | 69 ++++++++++++++-----
 1 file changed, 50 insertions(+), 19 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index f26bf83edb..c4f33cbb2e 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -304,6 +304,54 @@ nfp_flower_repr_tx_burst(void *tx_queue,
 	return sent;
 }
 
+static void
+nfp_flower_repr_free_queue(struct nfp_flower_representor *repr)
+{
+	uint16_t i;
+	struct rte_eth_dev *eth_dev = repr->eth_dev;
+
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+		rte_free(eth_dev->data->tx_queues[i]);
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+		rte_free(eth_dev->data->rx_queues[i]);
+}
+
+static void
+nfp_flower_pf_repr_close_queue(struct nfp_flower_representor *repr)
+{
+	struct rte_eth_dev *eth_dev = repr->eth_dev;
+
+	/*
+	 * We assume that the DPDK application is stopping all the
+	 * threads/queues before calling the device close function.
+	 */
+	nfp_net_disable_queues(eth_dev);
+
+	/* Clear queues */
+	nfp_net_close_tx_queue(eth_dev);
+	nfp_net_close_rx_queue(eth_dev);
+}
+
+static void
+nfp_flower_repr_close_queue(struct nfp_flower_representor *repr)
+{
+	switch (repr->repr_type) {
+	case NFP_REPR_TYPE_PHYS_PORT:
+		nfp_flower_repr_free_queue(repr);
+		break;
+	case NFP_REPR_TYPE_PF:
+		nfp_flower_pf_repr_close_queue(repr);
+		break;
+	case NFP_REPR_TYPE_VF:
+		nfp_flower_repr_free_queue(repr);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported repr port type.");
+		break;
+	}
+}
+
 static int
 nfp_flower_repr_uninit(struct rte_eth_dev *eth_dev)
 {
@@ -348,8 +396,6 @@ nfp_flower_repr_dev_close(struct rte_eth_dev *dev)
 	uint16_t i;
 	struct nfp_net_hw *hw;
 	struct nfp_pf_dev *pf_dev;
-	struct nfp_net_txq *this_tx_q;
-	struct nfp_net_rxq *this_rx_q;
 	struct nfp_flower_representor *repr;
 	struct nfp_app_fw_flower *app_fw_flower;
 
@@ -361,26 +407,11 @@ nfp_flower_repr_dev_close(struct rte_eth_dev *dev)
 	hw = app_fw_flower->pf_hw;
 	pf_dev = hw->pf_dev;
 
-	/*
-	 * We assume that the DPDK application is stopping all the
-	 * threads/queues before calling the device close function.
-	 */
-	nfp_net_disable_queues(dev);
-
-	/* Clear queues */
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		this_tx_q = dev->data->tx_queues[i];
-		nfp_net_reset_tx_queue(this_tx_q);
-	}
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		this_rx_q = dev->data->rx_queues[i];
-		nfp_net_reset_rx_queue(this_rx_q);
-	}
-
 	if (pf_dev->app_fw_id != NFP_APP_FW_FLOWER_NIC)
 		return -EINVAL;
 
+	nfp_flower_repr_close_queue(repr);
+
 	nfp_flower_repr_free(repr, repr->repr_type);
 
 	for (i = 0; i < MAX_FLOWER_VFS; i++) {
-- 
2.39.1
The head move should happen after the lcore ID check, otherwise source
nodes will be missed.

Fixes: 35dfd9b9fd85 ("graph: introduce graph walk by cross-core dispatch")
Cc: stable@dpdk.org

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 lib/graph/rte_graph_model_mcore_dispatch.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/graph/rte_graph_model_mcore_dispatch.h b/lib/graph/rte_graph_model_mcore_dispatch.h
index 75ec388cad..b96469296e 100644
--- a/lib/graph/rte_graph_model_mcore_dispatch.h
+++ b/lib/graph/rte_graph_model_mcore_dispatch.h
@@ -97,12 +97,12 @@ rte_graph_walk_mcore_dispatch(struct rte_graph *graph)
 		__rte_graph_mcore_dispatch_sched_wq_process(graph);
 
 	while (likely(head != graph->tail)) {
-		node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]);
+		node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head]);
 
 		/* skip the src nodes which not bind with current worker */
 		if ((int32_t)head < 0 && node->dispatch.lcore_id != graph->dispatch.lcore_id)
 			continue;
-
+		head++;
 		/* Schedule the node until all task/objs are done */
 		if (node->dispatch.lcore_id != RTE_MAX_LCORE &&
 		    graph->dispatch.lcore_id != node->dispatch.lcore_id &&
-- 
2.34.1
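The bug class fixed above — reading a cursor, testing a condition that depends on the pre-increment value, and advancing the cursor in the wrong place — is easy to reproduce outside the graph library. A simplified sketch, not the dispatch model's real data structures; here the skip test must see the index of the element it just read, so the advance is done explicitly after the test rather than folded into the array access:

```c
#include <stddef.h>

/* Walk 'vals', skipping negative entries, and sum the rest.
 * The index is read first, tested, and only then advanced, so the
 * skip test always refers to the element it is about to act on.
 * Folding the increment into the read (vals[head++]) would make any
 * later test on 'head' observe the *next* index — the mistake the
 * patch above corrects. The advance happens before 'continue' so the
 * loop always makes forward progress. */
static int
sum_active(const int *vals, size_t tail)
{
	size_t head = 0;
	int sum = 0;

	while (head != tail) {
		int v = vals[head];

		head++; /* advance before any 'continue' */
		if (v < 0)
			continue; /* skip entries not meant for this worker */
		sum += v;
	}
	return sum;
}
```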
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: de3cfa2c9823 ("sched: initial import")
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 examples/qos_sched/args.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/examples/qos_sched/args.c b/examples/qos_sched/args.c
index 8d61d3e454..886542b3c1 100644
--- a/examples/qos_sched/args.c
+++ b/examples/qos_sched/args.c
@@ -184,10 +184,10 @@ app_parse_flow_conf(const char *conf_str)
 
 	pconf->rx_port = vals[0];
 	pconf->tx_port = vals[1];
-	pconf->rx_core = (uint8_t)vals[2];
-	pconf->wt_core = (uint8_t)vals[3];
+	pconf->rx_core = vals[2];
+	pconf->wt_core = vals[3];
 	if (ret == 5)
-		pconf->tx_core = (uint8_t)vals[4];
+		pconf->tx_core = vals[4];
 	else
 		pconf->tx_core = pconf->wt_core;
-- 
2.25.1
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: 0e8f47491f09 ("examples/vm_power: add command to query CPU frequency")
Cc: marcinx.hajkowski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 examples/vm_power_manager/guest_cli/vm_power_cli_guest.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
index 94bfbbaf78..5eddb47847 100644
--- a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
+++ b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
@@ -401,7 +401,7 @@ check_response_cmd(unsigned int lcore_id, int *result)
 
 struct cmd_set_cpu_freq_result {
 	cmdline_fixed_string_t set_cpu_freq;
-	uint8_t lcore_id;
+	uint32_t lcore_id;
 	cmdline_fixed_string_t cmd;
 };
 
@@ -444,7 +444,7 @@ cmdline_parse_token_string_t cmd_set_cpu_freq =
 		set_cpu_freq, "set_cpu_freq");
 cmdline_parse_token_num_t cmd_set_cpu_freq_core_num =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_cpu_freq_result,
-			lcore_id, RTE_UINT8);
+			lcore_id, RTE_UINT32);
 cmdline_parse_token_string_t cmd_set_cpu_freq_cmd_cmd =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_cpu_freq_result,
 			cmd, "up#down#min#max#enable_turbo#disable_turbo");
-- 
2.25.1
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: d299106e8e31 ("examples/ipsec-secgw: add IPsec sample application")
Cc: sergio.gonzalez.monroy@intel.com
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 examples/ipsec-secgw/event_helper.h |  2 +-
 examples/ipsec-secgw/ipsec-secgw.c  | 37 +++++++++++++++--------------
 examples/ipsec-secgw/ipsec.c        |  2 +-
 examples/ipsec-secgw/ipsec.h        |  6 ++---
 examples/ipsec-secgw/ipsec_worker.c | 10 ++++----
 5 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index dfb81bfcf1..be635685b4 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -102,7 +102,7 @@ struct eh_event_link_info {
 	/**< Event port ID */
 	uint8_t eventq_id;
 	/**< Event queue to be linked to the port */
-	uint8_t lcore_id;
+	uint32_t lcore_id;
 	/**< Lcore to be polling on this port */
 };
 
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 45a303850d..dc7491a2b9 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -220,8 +220,8 @@ static const char *cfgfile;
 
 struct lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
-	uint8_t lcore_id;
+	uint16_t queue_id;
+	uint32_t lcore_id;
 } __rte_cache_aligned;
 
 static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
@@ -695,8 +695,7 @@ ipsec_poll_mode_worker(void)
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	uint32_t lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
-	uint16_t i, nb_rx, portid;
-	uint8_t queueid;
+	uint16_t i, nb_rx, portid, queueid;
 	struct lcore_conf *qconf;
 	int32_t rc, socket_id;
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
@@ -743,7 +742,7 @@ ipsec_poll_mode_worker(void)
 		portid = rxql[i].port_id;
 		queueid = rxql[i].queue_id;
 		RTE_LOG(INFO, IPSEC,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
@@ -788,8 +787,7 @@ int
 check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid)
 {
 	uint16_t i;
-	uint16_t portid;
-	uint8_t queueid;
+	uint16_t portid, queueid;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		portid = lcore_params_array[i].port_id;
@@ -809,7 +807,7 @@ check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid)
 static int32_t
 check_poll_mode_params(struct eh_conf *eh_conf)
 {
-	uint8_t lcore;
+	uint32_t lcore;
 	uint16_t portid;
 	uint16_t i;
 	int32_t socket_id;
@@ -828,13 +826,13 @@ check_poll_mode_params(struct eh_conf *eh_conf)
 	for (i = 0; i < nb_lcore_params; ++i) {
 		lcore = lcore_params[i].lcore_id;
 		if (!rte_lcore_is_enabled(lcore)) {
-			printf("error: lcore %hhu is not enabled in "
+			printf("error: lcore %u is not enabled in "
 				"lcore mask\n", lcore);
 			return -1;
 		}
 		socket_id = rte_lcore_to_socket_id(lcore);
 		if (socket_id != 0 && numa_on == 0) {
-			printf("warning: lcore %hhu is on socket %d "
+			printf("warning: lcore %u is on socket %d "
 				"with numa off\n",
 				lcore, socket_id);
 		}
@@ -851,7 +849,7 @@ check_poll_mode_params(struct eh_conf *eh_conf)
 	return 0;
 }
 
-static uint8_t
+static uint16_t
 get_port_nb_rx_queues(const uint16_t port)
 {
 	int32_t queue = -1;
@@ -862,14 +860,14 @@ get_port_nb_rx_queues(const uint16_t port)
 		    lcore_params[i].queue_id > queue)
 			queue = lcore_params[i].queue_id;
 	}
-	return (uint8_t)(++queue);
+	return (uint16_t)(++queue);
 }
 
 static int32_t
 init_lcore_rx_queues(void)
 {
 	uint16_t i, nb_rx_queue;
-	uint8_t lcore;
+	uint32_t lcore;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		lcore = lcore_params[i].lcore_id;
@@ -1050,6 +1048,8 @@ parse_config(const char *q_arg)
 	char *str_fld[_NUM_FLD];
 	int32_t i;
 	uint32_t size;
+	uint32_t max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS,
+				USHRT_MAX, RTE_MAX_LCORE};
 
 	nb_lcore_params = 0;
 
@@ -1070,7 +1070,7 @@ parse_config(const char *q_arg)
 		for (i = 0; i < _NUM_FLD; i++) {
 			errno = 0;
 			int_fld[i] = strtoul(str_fld[i], &end, 0);
-			if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
+			if (errno != 0 || end == str_fld[i] || int_fld[i] > max_fld[i])
 				return -1;
 		}
 		if (nb_lcore_params >= MAX_LCORE_PARAMS) {
@@ -1079,11 +1079,11 @@ parse_config(const char *q_arg)
 			return -1;
 		}
 		lcore_params_array[nb_lcore_params].port_id =
-			(uint8_t)int_fld[FLD_PORT];
+			(uint16_t)int_fld[FLD_PORT];
 		lcore_params_array[nb_lcore_params].queue_id =
-			(uint8_t)int_fld[FLD_QUEUE];
+			(uint16_t)int_fld[FLD_QUEUE];
 		lcore_params_array[nb_lcore_params].lcore_id =
-			(uint8_t)int_fld[FLD_LCORE];
+			(uint32_t)int_fld[FLD_LCORE];
 		++nb_lcore_params;
 	}
 	lcore_params = lcore_params_array;
@@ -1919,7 +1919,8 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads,
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
 	uint16_t nb_tx_queue, nb_rx_queue;
-	uint16_t tx_queueid, rx_queueid, queue, lcore_id;
+	uint16_t tx_queueid, rx_queueid, queue;
+	uint32_t lcore_id;
 	int32_t ret, socket_id;
 	struct lcore_conf *qconf;
 	struct rte_ether_addr ethaddr;
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index c321108119..b52b0ffc3d 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -259,7 +259,7 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx_lcore[],
 			continue;
 
 		/* Looking for cryptodev, which can handle this SA */
-		key.lcore_id = (uint8_t)lcore_id;
+		key.lcore_id = lcore_id;
 		key.cipher_algo = (uint8_t)sa->cipher_algo;
 		key.auth_algo = (uint8_t)sa->auth_algo;
 		key.aead_algo = (uint8_t)sa->aead_algo;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index bdcada1c40..6526a80d81 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -256,11 +256,11 @@ extern struct offloads tx_offloads;
  * (hash key calculation reads 8 bytes if this struct is size 5 bytes).
  */
 struct cdev_key {
-	uint16_t lcore_id;
+	uint32_t lcore_id;
 	uint8_t cipher_algo;
 	uint8_t auth_algo;
 	uint8_t aead_algo;
-	uint8_t padding[3]; /* padding to 8-byte size should be zeroed */
+	uint8_t padding; /* padding to 8-byte size should be zeroed */
 };
 
 struct socket_ctx {
@@ -285,7 +285,7 @@ struct cnt_blk {
 
 struct lcore_rx_queue {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	void *sec_ctx;
 } __rte_cache_aligned;
 
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 8d122e8519..90a4c38ba4 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -1598,8 +1598,7 @@ ipsec_poll_mode_wrkr_inl_pr(void)
 	int32_t socket_id;
 	uint32_t lcore_id;
 	int32_t i, nb_rx;
-	uint16_t portid;
-	uint8_t queueid;
+	uint16_t portid, queueid;
 
 	prev_tsc = 0;
 	lcore_id = rte_lcore_id();
@@ -1633,7 +1632,7 @@ ipsec_poll_mode_wrkr_inl_pr(void)
 		portid = rxql[i].port_id;
 		queueid = rxql[i].queue_id;
 		RTE_LOG(INFO, IPSEC,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
@@ -1729,8 +1728,7 @@ ipsec_poll_mode_wrkr_inl_pr_ss(void)
 	uint32_t i, nb_rx, j;
 	int32_t socket_id;
 	uint32_t lcore_id;
-	uint16_t portid;
-	uint8_t queueid;
+	uint16_t portid, queueid;
 
 	prev_tsc = 0;
 	lcore_id = rte_lcore_id();
@@ -1764,7 +1762,7 @@ ipsec_poll_mode_wrkr_inl_pr_ss(void)
 		portid = rxql[i].port_id;
 		queueid = rxql[i].queue_id;
 		RTE_LOG(INFO, IPSEC,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
-- 
2.25.1
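The per-field bound check added in parse_config() above generalizes to any comma-separated "port,queue,lcore" parser: instead of one hard-coded `255` limit, each field is validated against its own maximum. A self-contained sketch of the idea — the limits below are illustrative constants, not the real RTE_MAX_ETHPORTS/RTE_MAX_LCORE build values:

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>

enum { FLD_PORT, FLD_QUEUE, FLD_LCORE, NUM_FLD };

/* Parse "port,queue,lcore" from a writable string, range-checking each
 * field against its own maximum, mirroring the max_fld[] idea in the
 * patch. Returns 0 on success, -1 on any parse or range error. */
static int
parse_triplet(char *s, unsigned long out[NUM_FLD])
{
	/* Illustrative limits; the patch uses the DPDK build constants. */
	const unsigned long max_fld[NUM_FLD] = { 32, USHRT_MAX, 128 };
	char *tok, *end;
	int i;

	for (i = 0; i < NUM_FLD; i++) {
		tok = strtok(i == 0 ? s : NULL, ",");
		if (tok == NULL)
			return -1;
		errno = 0;
		out[i] = strtoul(tok, &end, 0);
		if (errno != 0 || end == tok || *end != '\0' ||
				out[i] > max_fld[i])
			return -1;
	}
	return 0;
}
```

With the old single `> 255` check, an lcore of 300 and a queue of 300 were rejected or accepted together; per-field maxima let the queue field grow past 255 while still bounding the lcore field.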
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: 08bd1a174461 ("examples/l3fwd-graph: add graph-based l3fwd skeleton")
Cc: ndabilpuram@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 examples/l3fwd-graph/main.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 96cb1c81ff..557ac6d823 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -90,7 +90,7 @@ static int pcap_trace_enable;
 
 struct lcore_rx_queue {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	char node_name[RTE_NODE_NAMESIZE];
 };
 
@@ -110,8 +110,8 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 struct lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
-	uint8_t lcore_id;
+	uint16_t queue_id;
+	uint32_t lcore_id;
 } __rte_cache_aligned;
 
 static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
@@ -205,19 +205,19 @@ check_worker_model_params(void)
 static int
 check_lcore_params(void)
 {
-	uint8_t queue, lcore;
+	uint16_t queue, i;
 	int socketid;
-	uint16_t i;
+	uint32_t lcore;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		queue = lcore_params[i].queue_id;
 		if (queue >= MAX_RX_QUEUE_PER_PORT) {
-			printf("Invalid queue number: %hhu\n", queue);
+			printf("Invalid queue number: %hu\n", queue);
 			return -1;
 		}
 		lcore = lcore_params[i].lcore_id;
 		if (!rte_lcore_is_enabled(lcore)) {
-			printf("Error: lcore %hhu is not enabled in lcore mask\n",
+			printf("Error: lcore %u is not enabled in lcore mask\n",
 			       lcore);
 			return -1;
 		}
@@ -228,7 +228,7 @@ check_lcore_params(void)
 		}
 		socketid = rte_lcore_to_socket_id(lcore);
 		if ((socketid != 0) && (numa_on == 0)) {
-			printf("Warning: lcore %hhu is on socket %d with numa off\n",
+			printf("Warning: lcore %u is on socket %d with numa off\n",
 			       lcore, socketid);
 		}
 	}
@@ -257,7 +257,7 @@ check_port_config(void)
 	return 0;
 }
 
-static uint8_t
+static uint16_t
 get_port_n_rx_queues(const uint16_t port)
 {
 	int queue = -1;
@@ -275,14 +275,14 @@ get_port_n_rx_queues(const uint16_t port)
 		}
 	}
 
-	return (uint8_t)(++queue);
+	return (uint16_t)(++queue);
 }
 
 static int
 init_lcore_rx_queues(void)
 {
 	uint16_t i, nb_rx_queue;
-	uint8_t lcore;
+	uint32_t lcore;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		lcore = lcore_params[i].lcore_id;
@@ -290,7 +290,7 @@ init_lcore_rx_queues(void)
 		if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) {
 			printf("Error: too many queues (%u) for lcore: %u\n",
 			       (unsigned int)nb_rx_queue + 1,
-			       (unsigned int)lcore);
+			       lcore);
 			return -1;
 		}
 
@@ -448,11 +448,11 @@ parse_config(const char *q_arg)
 		}
 
 		lcore_params_array[nb_lcore_params].port_id =
-			(uint8_t)int_fld[FLD_PORT];
+			(uint16_t)int_fld[FLD_PORT];
 		lcore_params_array[nb_lcore_params].queue_id =
-			(uint8_t)int_fld[FLD_QUEUE];
+			(uint16_t)int_fld[FLD_QUEUE];
 		lcore_params_array[nb_lcore_params].lcore_id =
-			(uint8_t)int_fld[FLD_LCORE];
+			(uint32_t)int_fld[FLD_LCORE];
 		++nb_lcore_params;
 	}
 	lcore_params = lcore_params_array;
@@ -1011,7 +1011,8 @@ main(int argc, char **argv)
 		"ethdev_tx-*",
 		"pkt_drop",
 	};
-	uint8_t nb_rx_queue, queue, socketid;
+	uint8_t socketid;
+	uint16_t nb_rx_queue, queue;
 	struct rte_graph_param graph_conf;
 	struct rte_eth_dev_info dev_info;
 	uint32_t nb_ports, nb_conf = 0;
-- 
2.25.1
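The widened get_port_n_rx_queues() above follows a common pattern: the number of RX queues needed for a port is the highest queue ID configured for it, plus one. A minimal sketch with a hypothetical params array (not the example apps' actual globals), using the widened `uint16_t` return type from the patch:

```c
#include <stddef.h>
#include <stdint.h>

struct lcore_param {
	uint16_t port_id;
	uint16_t queue_id; /* widened from uint8_t, as in the patch */
};

/* Highest queue_id configured for 'port' plus one, or 0 if the port
 * does not appear in 'p'. Starting 'queue' at -1 makes the "+1" give
 * 0 for unused ports without a special case. */
static uint16_t
n_rx_queues(const struct lcore_param *p, size_t n, uint16_t port)
{
	int queue = -1;
	size_t i;

	for (i = 0; i < n; i++) {
		if (p[i].port_id == port && p[i].queue_id > queue)
			queue = p[i].queue_id;
	}
	return (uint16_t)(queue + 1);
}
```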
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: f88e7c175a68 ("examples/l3fwd-power: add high/regular perf cores options")
Cc: radu.nicolau@intel.com
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 examples/l3fwd-power/main.c      | 59 ++++++++++++++++----------------
 examples/l3fwd-power/main.h      |  4 +--
 examples/l3fwd-power/perf_core.c | 16 +++++----
 3 files changed, 41 insertions(+), 38 deletions(-)

diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f4adcf41b5..4430605df0 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -214,7 +214,7 @@ enum freq_scale_hint_t
 
 struct lcore_rx_queue {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	enum freq_scale_hint_t freq_up_hint;
 	uint32_t zero_rx_packet_count;
 	uint32_t idle_hint;
@@ -838,7 +838,7 @@ sleep_until_rx_interrupt(int num, int lcore)
 	struct rte_epoll_event event[num];
 	int n, i;
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	void *data;
 
 	if (status[lcore].wakeup) {
@@ -850,9 +850,9 @@ sleep_until_rx_interrupt(int num, int lcore)
 	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, event, num, 10);
 	for (i = 0; i < n; i++) {
 		data = event[i].epdata.data;
-		port_id = ((uintptr_t)data) >> CHAR_BIT;
+		port_id = ((uintptr_t)data) >> (sizeof(uint16_t) * CHAR_BIT);
 		queue_id = ((uintptr_t)data) &
-			RTE_LEN2MASK(CHAR_BIT, uint8_t);
+			RTE_LEN2MASK((sizeof(uint16_t) * CHAR_BIT), uint16_t);
 		RTE_LOG(INFO, L3FWD_POWER,
 			"lcore %u is waked up from rx interrupt on"
 			" port %d queue %d\n",
@@ -867,7 +867,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on)
 {
 	int i;
 	struct lcore_rx_queue *rx_queue;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	uint16_t port_id;
 
 	for (i = 0; i < qconf->n_rx_queue; ++i) {
@@ -887,7 +887,7 @@ static void turn_on_off_intr(struct lcore_conf *qconf, bool on)
 static int event_register(struct lcore_conf *qconf)
 {
 	struct lcore_rx_queue *rx_queue;
-	uint8_t queueid;
+	uint16_t queueid;
 	uint16_t portid;
 	uint32_t data;
 	int ret;
@@ -897,7 +897,7 @@ static int event_register(struct lcore_conf *qconf)
 		rx_queue = &(qconf->rx_queue_list[i]);
 		portid = rx_queue->port_id;
 		queueid = rx_queue->queue_id;
-		data = portid << CHAR_BIT | queueid;
+		data = portid << (sizeof(uint16_t) * CHAR_BIT) | queueid;
 
 		ret = rte_eth_dev_rx_intr_ctl_q(portid, queueid,
 						RTE_EPOLL_PER_THREAD,
@@ -917,8 +917,7 @@ static int main_intr_loop(__rte_unused void *dummy)
 	unsigned int lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	uint32_t lcore_rx_idle_count = 0;
@@ -946,7 +945,7 @@ static int main_intr_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
@@ -1083,8 +1082,7 @@ main_telemetry_loop(__rte_unused void *dummy)
 	unsigned int lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc, prev_tel_tsc;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	uint64_t ep_nep[2] = {0}, fp_nfp[2] = {0};
@@ -1114,7 +1112,7 @@ main_telemetry_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER, " -- lcoreid=%u portid=%u "
-			"rxqueueid=%hhu\n", lcore_id, portid, queueid);
+			"rxqueueid=%hu\n", lcore_id, portid, queueid);
 	}
 
 	while (!is_done()) {
@@ -1205,8 +1203,7 @@ main_legacy_loop(__rte_unused void *dummy)
 	uint64_t prev_tsc, diff_tsc, cur_tsc, tim_res_tsc, hz;
 	uint64_t prev_tsc_power = 0, cur_tsc_power, diff_tsc_power;
 	int i, j, nb_rx;
-	uint8_t queueid;
-	uint16_t portid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	struct lcore_rx_queue *rx_queue;
 	enum freq_scale_hint_t lcore_scaleup_hint;
@@ -1234,7 +1231,7 @@ main_legacy_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD_POWER, " -- lcoreid=%u portid=%u "
-			"rxqueueid=%hhu\n", lcore_id, portid, queueid);
+			"rxqueueid=%hu\n", lcore_id, portid, queueid);
 	}
 
 	/* add into event wait list */
@@ -1399,25 +1396,25 @@ main_legacy_loop(__rte_unused void *dummy)
 static int
 check_lcore_params(void)
 {
-	uint8_t queue, lcore;
-	uint16_t i;
+	uint16_t queue, i;
+	uint32_t lcore;
 	int socketid;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		queue = lcore_params[i].queue_id;
 		if (queue >= MAX_RX_QUEUE_PER_PORT) {
-			printf("invalid queue number: %hhu\n", queue);
+			printf("invalid queue number: %hu\n", queue);
 			return -1;
 		}
 		lcore = lcore_params[i].lcore_id;
 		if (!rte_lcore_is_enabled(lcore)) {
-			printf("error: lcore %hhu is not enabled in lcore "
+			printf("error: lcore %u is not enabled in lcore "
 				"mask\n", lcore);
 			return -1;
 		}
 		if ((socketid = rte_lcore_to_socket_id(lcore) != 0) &&
 							(numa_on == 0)) {
-			printf("warning: lcore %hhu is on socket %d with numa "
+			printf("warning: lcore %u is on socket %d with numa "
 				"off\n", lcore, socketid);
 		}
 		if (app_mode == APP_MODE_TELEMETRY && lcore == rte_lcore_id()) {
@@ -1451,7 +1448,7 @@ check_port_config(void)
 	return 0;
 }
 
-static uint8_t
+static uint16_t
 get_port_n_rx_queues(const uint16_t port)
 {
 	int queue = -1;
@@ -1462,14 +1459,14 @@ get_port_n_rx_queues(const uint16_t port)
 		    lcore_params[i].queue_id > queue)
 			queue = lcore_params[i].queue_id;
 	}
-	return (uint8_t)(++queue);
+	return (uint16_t)(++queue);
 }
 
 static int
 init_lcore_rx_queues(void)
 {
 	uint16_t i, nb_rx_queue;
-	uint8_t lcore;
+	uint32_t lcore;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		lcore = lcore_params[i].lcore_id;
@@ -1661,6 +1658,8 @@ parse_config(const char *q_arg)
 	char *str_fld[_NUM_FLD];
 	int i;
 	unsigned size;
+	unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS,
+					USHRT_MAX, RTE_MAX_LCORE};
 
 	nb_lcore_params = 0;
 
@@ -1681,7 +1680,7 @@ parse_config(const char *q_arg)
 			errno = 0;
 			int_fld[i] = strtoul(str_fld[i], &end, 0);
 			if (errno != 0 || end == str_fld[i] || int_fld[i] >
-					255)
+					max_fld[i])
 				return -1;
 		}
 		if (nb_lcore_params >= MAX_LCORE_PARAMS) {
@@ -1690,11 +1689,11 @@ parse_config(const char *q_arg)
 			return -1;
 		}
 		lcore_params_array[nb_lcore_params].port_id =
-				(uint8_t)int_fld[FLD_PORT];
+				(uint16_t)int_fld[FLD_PORT];
 		lcore_params_array[nb_lcore_params].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
+				(uint16_t)int_fld[FLD_QUEUE];
 		lcore_params_array[nb_lcore_params].lcore_id =
-				(uint8_t)int_fld[FLD_LCORE];
+				(uint32_t)int_fld[FLD_LCORE];
 		++nb_lcore_params;
 	}
 	lcore_params = lcore_params_array;
@@ -2501,8 +2500,8 @@ main(int argc, char **argv)
 	uint64_t hz;
 	uint32_t n_tx_queue, nb_lcores;
 	uint32_t dev_rxq_num, dev_txq_num;
-	uint8_t nb_rx_queue, queue, socketid;
-	uint16_t portid;
+	uint8_t socketid;
+	uint16_t portid, nb_rx_queue, queue;
 	const char *ptr_strings[NUM_TELSTATS];
 
 	/* init EAL */
diff --git a/examples/l3fwd-power/main.h b/examples/l3fwd-power/main.h
index 258de98f5b..194bd82102 100644
--- a/examples/l3fwd-power/main.h
+++ b/examples/l3fwd-power/main.h
@@ -9,8 +9,8 @@
 #define MAX_LCORE_PARAMS 1024
 struct lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
-	uint8_t lcore_id;
+	uint16_t queue_id;
+	uint32_t lcore_id;
 } __rte_cache_aligned;
 
 extern struct lcore_params *lcore_params;
diff --git a/examples/l3fwd-power/perf_core.c b/examples/l3fwd-power/perf_core.c
index 41ef6d0c9a..c2cdc4bf49 100644
--- a/examples/l3fwd-power/perf_core.c
+++ b/examples/l3fwd-power/perf_core.c
@@ -22,9 +22,9 @@ static uint16_t nb_hp_lcores;
 
 struct perf_lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 	uint8_t high_perf;
-	uint8_t lcore_idx;
+	uint32_t lcore_idx;
 } __rte_cache_aligned;
 
 static struct perf_lcore_params prf_lc_prms[MAX_LCORE_PARAMS];
@@ -132,6 +132,8 @@ parse_perf_config(const char *q_arg)
 	char *str_fld[_NUM_FLD];
 	int i;
 	unsigned int size;
+	unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, USHRT_MAX,
+						UCHAR_MAX, RTE_MAX_LCORE};
 
 	nb_prf_lc_prms = 0;
 
@@ -152,7 +154,9 @@ parse_perf_config(const char *q_arg)
 	for (i = 0; i < _NUM_FLD; i++) {
 		errno = 0;
 		int_fld[i] = strtoul(str_fld[i], &end, 0);
-		if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
+		if (errno != 0 || end == str_fld[i] || int_fld[i] >
+							max_fld[i])
+
 			return -1;
 	}
 	if (nb_prf_lc_prms >= MAX_LCORE_PARAMS) {
@@ -161,13 +165,13 @@ parse_perf_config(const char *q_arg)
 		return -1;
 	}
 	prf_lc_prms[nb_prf_lc_prms].port_id =
-			(uint8_t)int_fld[FLD_PORT];
+			(uint16_t)int_fld[FLD_PORT];
 	prf_lc_prms[nb_prf_lc_prms].queue_id =
-			(uint8_t)int_fld[FLD_QUEUE];
+			(uint16_t)int_fld[FLD_QUEUE];
 	prf_lc_prms[nb_prf_lc_prms].high_perf =
 			!!(uint8_t)int_fld[FLD_LCORE_HP];
 	prf_lc_prms[nb_prf_lc_prms].lcore_idx =
-			(uint8_t)int_fld[FLD_LCORE_IDX];
+			(uint32_t)int_fld[FLD_LCORE_IDX];
 	++nb_prf_lc_prms;
 }
-- 
2.25.1
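The interrupt path in the l3fwd-power patch packs the port and queue into the epoll user data word; the change widens the shift from 8 bits (CHAR_BIT) to 16 so queue IDs above 255 survive the round trip. A standalone sketch of that packing — plain functions here rather than the RTE_LEN2MASK macro:

```c
#include <limits.h>
#include <stdint.h>

/* Pack a 16-bit port and 16-bit queue into one pointer-sized word,
 * mirroring the widened shift in the patch: the queue field now gets
 * a full 16 bits instead of the old 8 (CHAR_BIT), so queue IDs above
 * 255 no longer collide with the port bits. */
static uintptr_t
pack_rxq(uint16_t port_id, uint16_t queue_id)
{
	return ((uintptr_t)port_id << (sizeof(uint16_t) * CHAR_BIT)) | queue_id;
}

/* Inverse of pack_rxq(): recover both fields from the packed word. */
static void
unpack_rxq(uintptr_t data, uint16_t *port_id, uint16_t *queue_id)
{
	*port_id = (uint16_t)(data >> (sizeof(uint16_t) * CHAR_BIT));
	*queue_id = (uint16_t)(data & UINT16_MAX);
}
```

With the old 8-bit shift, packing queue 300 would have smeared its high bits into the port field; the 16-bit layout keeps the two fields disjoint for the full `uint16_t` range.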
Currently the config option allows lcore IDs up to 255,
irrespective of RTE_MAX_LCORES and needs to be fixed.

The patch allows config options based on DPDK config.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
---
 examples/l3fwd/l3fwd.h       |  2 +-
 examples/l3fwd/l3fwd_acl.c   |  4 ++--
 examples/l3fwd/l3fwd_em.c    |  4 ++--
 examples/l3fwd/l3fwd_event.h |  2 +-
 examples/l3fwd/l3fwd_fib.c   |  4 ++--
 examples/l3fwd/l3fwd_lpm.c   |  5 ++---
 examples/l3fwd/main.c        | 40 ++++++++++++++++++++----------------
 7 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index e7ae0e5834..12c264cb4c 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -74,7 +74,7 @@ struct mbuf_table {
 
 struct lcore_rx_queue {
 	uint16_t port_id;
-	uint8_t queue_id;
+	uint16_t queue_id;
 } __rte_cache_aligned;
 
 struct lcore_conf {
diff --git a/examples/l3fwd/l3fwd_acl.c b/examples/l3fwd/l3fwd_acl.c
index 401692bcec..2bd63181bc 100644
--- a/examples/l3fwd/l3fwd_acl.c
+++ b/examples/l3fwd/l3fwd_acl.c
@@ -997,7 +997,7 @@ acl_main_loop(__rte_unused void *dummy)
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, nb_rx;
 	uint16_t portid;
-	uint8_t queueid;
+	uint16_t queueid;
 	struct lcore_conf *qconf;
 	int socketid;
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
@@ -1020,7 +1020,7 @@ acl_main_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 40e102b38a..cd2bb4a4bb 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -586,7 +586,7 @@ em_main_loop(__rte_unused void *dummy)
 	unsigned lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, nb_rx;
-	uint8_t queueid;
+	uint16_t queueid;
 	uint16_t portid;
 	struct lcore_conf *qconf;
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
@@ -609,7 +609,7 @@ em_main_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 9aad358003..c6a4a89127 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -78,8 +78,8 @@ struct l3fwd_event_resources {
 	uint8_t deq_depth;
 	uint8_t has_burst;
 	uint8_t enabled;
-	uint8_t eth_rx_queues;
 	uint8_t vector_enabled;
+	uint16_t eth_rx_queues;
 	uint16_t vector_size;
 	uint64_t vector_tmo_ns;
 };
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index 6a21984415..7da55f707a 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -186,7 +186,7 @@ fib_main_loop(__rte_unused void *dummy)
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, nb_rx;
 	uint16_t portid;
-	uint8_t queueid;
+	uint16_t queueid;
 	struct lcore_conf *qconf;
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
 			US_PER_S * BURST_TX_DRAIN_US;
@@ -208,7 +208,7 @@ fib_main_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index a484a33089..01d38bc69c 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -148,8 +148,7 @@ lpm_main_loop(__rte_unused void *dummy)
 	unsigned lcore_id;
 	uint64_t prev_tsc, diff_tsc, cur_tsc;
 	int i, nb_rx;
-	uint16_t portid;
-	uint8_t queueid;
+	uint16_t portid, queueid;
 	struct lcore_conf *qconf;
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
 			US_PER_S * BURST_TX_DRAIN_US;
@@ -171,7 +170,7 @@ lpm_main_loop(__rte_unused void *dummy)
 		portid = qconf->rx_queue_list[i].port_id;
 		queueid = qconf->rx_queue_list[i].queue_id;
 		RTE_LOG(INFO, L3FWD,
-			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			" -- lcoreid=%u portid=%u rxqueueid=%hu\n",
 			lcore_id, portid, queueid);
 	}
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 8d32ae1dd5..19e4d9dfa2 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -98,8 +98,8 @@ struct parm_cfg parm_config;
 
 struct lcore_params {
 	uint16_t port_id;
-	uint8_t queue_id;
-	uint8_t lcore_id;
+	uint16_t queue_id;
+	uint32_t lcore_id;
 } __rte_cache_aligned;
 
 static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
@@ -292,24 +292,24 @@ setup_l3fwd_lookup_tables(void)
 static int
 check_lcore_params(void)
 {
-	uint8_t queue, lcore;
-	uint16_t i;
+	uint16_t queue, i;
+	uint32_t lcore;
 	int socketid;
 
 	for (i = 0; i < nb_lcore_params; ++i) {
 		queue = lcore_params[i].queue_id;
 		if (queue >= MAX_RX_QUEUE_PER_PORT) {
-			printf("invalid queue number: %hhu\n", queue);
+			printf("invalid queue number: %hu\n", queue);
 			return -1;
 		}
 		lcore = lcore_params[i].lcore_id;
 		if (!rte_lcore_is_enabled(lcore)) {
-			printf("error: lcore %hhu is not enabled in lcore mask\n", lcore);
+			printf("error: lcore %u is not enabled in lcore mask\n", lcore);
 			return -1;
 		}
 		if ((socketid = rte_lcore_to_socket_id(lcore) != 0) &&
 			(numa_on == 0)) {
-			printf("warning: lcore %hhu is on socket %d with numa off \n",
+			printf("warning: lcore %u is on socket %d with numa off\n",
 				lcore, socketid);
 		}
 	}
@@ -336,7 +336,7 @@ check_port_config(void)
 	return 0;
 }
 
-static uint8_t
+static uint16_t
 get_port_n_rx_queues(const uint16_t port)
 {
 	int queue = -1;
@@ -352,21 +352,21 @@ get_port_n_rx_queues(const uint16_t port)
 				lcore_params[i].port_id);
 		}
 	}
-	return (uint8_t)(++queue);
+	return (uint16_t)(++queue);
 }
 
 static int
 init_lcore_rx_queues(void)
 {
uint16_t i, nb_rx_queue; - uint8_t lcore; + uint32_t lcore; for (i = 0; i < nb_lcore_params; ++i) { lcore = lcore_params[i].lcore_id; nb_rx_queue = lcore_conf[lcore].n_rx_queue; if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) { printf("error: too many queues (%u) for lcore: %u\n", - (unsigned)nb_rx_queue + 1, (unsigned)lcore); + (unsigned int)nb_rx_queue + 1, lcore); return -1; } else { lcore_conf[lcore].rx_queue_list[nb_rx_queue].port_id = @@ -500,6 +500,8 @@ parse_config(const char *q_arg) char *str_fld[_NUM_FLD]; int i; unsigned size; + uint32_t max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS, + USHRT_MAX, RTE_MAX_LCORE}; nb_lcore_params = 0; @@ -518,7 +520,8 @@ parse_config(const char *q_arg) for (i = 0; i < _NUM_FLD; i++){ errno = 0; int_fld[i] = strtoul(str_fld[i], &end, 0); - if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) + if (errno != 0 || end == str_fld[i] || int_fld[i] > + max_fld[i]) return -1; } if (nb_lcore_params >= MAX_LCORE_PARAMS) { @@ -527,11 +530,11 @@ parse_config(const char *q_arg) return -1; } lcore_params_array[nb_lcore_params].port_id = - (uint8_t)int_fld[FLD_PORT]; + (uint16_t)int_fld[FLD_PORT]; lcore_params_array[nb_lcore_params].queue_id = - (uint8_t)int_fld[FLD_QUEUE]; + (uint16_t)int_fld[FLD_QUEUE]; lcore_params_array[nb_lcore_params].lcore_id = - (uint8_t)int_fld[FLD_LCORE]; + (uint32_t)int_fld[FLD_LCORE]; ++nb_lcore_params; } lcore_params = lcore_params_array; @@ -630,7 +633,7 @@ parse_event_eth_rx_queues(const char *eth_rx_queues) { struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc(); char *end = NULL; - uint8_t num_eth_rx_queues; + uint16_t num_eth_rx_queues; /* parse decimal string */ num_eth_rx_queues = strtoul(eth_rx_queues, &end, 10); @@ -1211,7 +1214,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf, static void l3fwd_poll_resource_setup(void) { - uint8_t nb_rx_queue, queue, socketid; + uint8_t socketid; + uint16_t nb_rx_queue, queue; struct rte_eth_dev_info dev_info; uint32_t n_tx_queue, nb_lcores; struct 
rte_eth_txconf *txconf; @@ -1535,7 +1539,7 @@ main(int argc, char **argv) struct lcore_conf *qconf; uint16_t queueid, portid; unsigned int lcore_id; - uint8_t queue; + uint16_t queue; int ret; /* init EAL */ -- 2.25.1
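The per-field bounds check this patch adds to parse_config() can be sketched in isolation. The following is a hypothetical standalone version, not the l3fwd code: the numeric limits stand in for RTE_MAX_ETHPORTS and RTE_MAX_LCORE, and the helper name is invented. The point is that each field of a (port,queue,lcore) triplet is validated against its own maximum instead of a blanket 255.

```c
#include <errno.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

enum { FLD_PORT = 0, FLD_QUEUE, FLD_LCORE, NUM_FLD };

/* Per-field maxima; 32 and 128 are stand-ins for RTE_MAX_ETHPORTS
 * and RTE_MAX_LCORE. The queue field is now a uint16_t, so it is
 * bounded by USHRT_MAX rather than 255. */
static const unsigned long max_fld[NUM_FLD] = { 32, USHRT_MAX, 128 };

/* Parse one "port,queue,lcore" triplet; return 0 on success, -1 on error. */
static int
parse_triplet(const char *s, uint16_t *port, uint16_t *queue, uint32_t *lcore)
{
	unsigned long v[NUM_FLD];
	char *end;
	int i;

	for (i = 0; i < NUM_FLD; i++) {
		errno = 0;
		v[i] = strtoul(s, &end, 0);
		/* Reject parse errors and values above the field's own limit. */
		if (errno != 0 || end == s || v[i] > max_fld[i])
			return -1;
		s = (*end == ',') ? end + 1 : end;
	}
	*port = (uint16_t)v[FLD_PORT];
	*queue = (uint16_t)v[FLD_QUEUE];
	*lcore = (uint32_t)v[FLD_LCORE];
	return 0;
}
```

With the old single `int_fld[i] > 255` check, a config such as "0,300,64" was rejected even though queue 300 is valid for a uint16_t queue ID; the per-field limits accept it while still rejecting out-of-range lcores or ports.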
Hi Luca, The patch is good to be merged. Please go ahead. Thanks, Kishore -----Original Message----- From: luca.boccassi@gmail.com <luca.boccassi@gmail.com> Sent: Monday, March 18, 2024 11:39 AM To: Kishore Padmanabha <kishore.padmanabha@broadcom.com> Cc: Ajit Khaparde <ajit.khaparde@broadcom.com>; dpdk stable <stable@dpdk.org> Subject: patch 'net/bnxt: fix number of Tx queues being created' has been queued to stable release 22.11.5 Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/aead7fda3f5b68acb036d33ada05aa8e39643566 Thanks. Luca Boccassi --- From aead7fda3f5b68acb036d33ada05aa8e39643566 Mon Sep 17 00:00:00 2001 From: Kishore Padmanabha <kishore.padmanabha@broadcom.com> Date: Mon, 13 Nov 2023 11:08:52 -0500 Subject: [PATCH] net/bnxt: fix number of Tx queues being created [ upstream commit 05b67582cc93128bbf2eb26726d781b8c5c561b3 ] The number of Tx queues for the representor port is limited by the number of Rx rings instead of Tx rings.
Fixes: 322bd6e70272 ("net/bnxt: add port representor infrastructure") Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com> --- drivers/net/bnxt/bnxt_reps.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index f700adb629..d014714b93 100644 --- a/drivers/net/bnxt/bnxt_reps.c +++ b/drivers/net/bnxt/bnxt_reps.c @@ -739,10 +739,10 @@ int bnxt_rep_tx_queue_setup_op(struct rte_eth_dev *eth_dev, struct bnxt_tx_queue *parent_txq, *txq; struct bnxt_vf_rep_tx_queue *vfr_txq; - if (queue_idx >= rep_bp->rx_nr_rings) { + if (queue_idx >= rep_bp->tx_nr_rings) { PMD_DRV_LOG(ERR, "Cannot create Tx rings %d. %d rings available\n", - queue_idx, rep_bp->rx_nr_rings); + queue_idx, rep_bp->tx_nr_rings); return -EINVAL; } -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.474735375 +0000 +++ 0027-net-bnxt-fix-number-of-Tx-queues-being-created.patch 2024-03-18 12:58:39.275349175 +0000 @@ -1 +1 @@ -From 05b67582cc93128bbf2eb26726d781b8c5c561b3 Mon Sep 17 00:00:00 2001 +From aead7fda3f5b68acb036d33ada05aa8e39643566 Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 05b67582cc93128bbf2eb26726d781b8c5c561b3 ] + @@ -10 +11,0 @@ -Cc: stable@dpdk.org @@ -19 +20 @@ -index edcc27f556..79b3583636 100644 +index f700adb629..d014714b93 100644
Hi commit authors (and maintainers), Despite being selected by the DPDK maintenance tool ./devtools/git-log-fixes.sh I didn't apply the following commits from DPDK main to the 22.11 stable branch, as conflicts or build errors occur. Can authors check your patches in the following list and either: - Backport your patches to the 22.11 branch, or - Indicate that the patch should not be backported Please do either of the above by 03/25/24. You can find a temporary work-in-progress branch of the coming 22.11.5 release at: https://github.com/bluca/dpdk-stable It is recommended to backport on top of that to minimize further conflicts or misunderstandings. Some notes on stable backports: A backport should contain a reference to the DPDK main branch commit in its commit message in the following fashion: [ upstream commit <commit's dpdk main branch SHA-1 checksum> ] For example: https://git.dpdk.org/dpdk-stable/commit/?h=18.11&id=d90e6ae6f936ecdc2fd3811ff9f26aec7f3c06eb When sending the backported patch, please indicate the target branch in the subject line, as we have multiple branches, for example: [PATCH 22.11] foo/bar: fix baz With git format-patch, this can be achieved by appending the parameter: --subject-prefix='PATCH 22.11' Send the backported patch to "stable@dpdk.org" but not "dev@dpdk.org". FYI, branch 22.11 is located at tree: https://git.dpdk.org/dpdk-stable Thanks. Luca Boccassi --- 5ecc8df4fa Dariusz Sosnowski net/mlx5: fix async flow create error handling ff9433b578 Dariusz Sosnowski net/mlx5: fix flow configure validation 727283742a Dariusz Sosnowski net/mlx5: fix rollback on failed flow configure 4359d9d1f7 Gregory Etelson net/mlx5: fix sync meter processing in HWS
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/6a73ac3d7a5cd78efd3bde7bfb2624342c79799e Thanks. Luca Boccassi --- From 6a73ac3d7a5cd78efd3bde7bfb2624342c79799e Mon Sep 17 00:00:00 2001 From: Shihong Wang <shihong.wang@corigine.com> Date: Mon, 11 Mar 2024 10:32:47 +0800 Subject: [PATCH] examples/ipsec-secgw: fix Rx queue ID in Rx callback [ upstream commit 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 ] The Rx queue ID on the core and on the port are not necessarily equal. For example, with two Rx queues on core0, queue0 and queue1, queue0 may be rx_queueid0 on port0 and queue1 rx_queueid0 on port1. The 'rte_eth_add_rx_callback()' function registers the callback per port, so it should be passed the Rx queue ID on the port.
Fixes: d04bb1c52647 ("examples/ipsec-secgw: use HW parsed packet type in poll mode") Signed-off-by: Shihong Wang <shihong.wang@corigine.com> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com> Reviewed-by: Peng Zhang <peng.zhang@corigine.com> --- examples/ipsec-secgw/ipsec-secgw.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 86ad2b0ea5..e4c3482411 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2056,10 +2056,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads, /* Register Rx callback if ptypes are not supported */ if (!ptype_supported && - !rte_eth_add_rx_callback(portid, queue, + !rte_eth_add_rx_callback(portid, rx_queueid, parse_ptype_cb, NULL)) { printf("Failed to add rx callback: port=%d, " - "queue=%d\n", portid, queue); + "rx_queueid=%d\n", portid, rx_queueid); } -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.512402326 +0000 +++ 0028-examples-ipsec-secgw-fix-Rx-queue-ID-in-Rx-callback.patch 2024-03-18 12:58:39.275349175 +0000 @@ -1 +1 @@ -From 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 Mon Sep 17 00:00:00 2001 +From 6a73ac3d7a5cd78efd3bde7bfb2624342c79799e Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 179e9b44ac6d64dc10ee3116e44b66f12d43e7a8 ] + @@ -16 +17,0 @@ -Cc: stable@dpdk.org @@ -26 +27 @@ -index a61bea400a..45a303850d 100644 +index 86ad2b0ea5..e4c3482411 100644 @@ -29 +30 @@ -@@ -2093,10 +2093,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads, +@@ -2056,10 +2056,10 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads,
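The bug this fix addresses is easy to restate in a few lines. The sketch below uses invented stand-in types and arrays, not the ipsec-secgw code: the index of an entry in a core's Rx queue list must not be confused with the queue's ID on its port.

```c
#include <stdint.h>

struct rx_queue {
	uint16_t port_id;
	uint16_t queue_id;   /* queue ID on the port, not the list index */
};

/* core0 polls queue 0 of port 0 and queue 0 of port 1: the second list
 * entry has list index 1 but port-relative queue ID 0. */
static const struct rx_queue core0_list[] = {
	{ .port_id = 0, .queue_id = 0 },
	{ .port_id = 1, .queue_id = 0 },
};

/* Stand-in for rte_eth_add_rx_callback(): records which queue ID the
 * callback was registered on, per port. */
static uint16_t registered_queue[2];

static void
register_ptype_callbacks(const struct rx_queue *list, int n)
{
	for (int i = 0; i < n; i++)
		/* Correct: pass list[i].queue_id. Passing the loop index 'i'
		 * (1 for the second entry, where the port queue ID is 0)
		 * was the bug. */
		registered_queue[list[i].port_id] = list[i].queue_id;
}
```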
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/f6a26c88648f181f03fc00b3731f8174d97dc222 Thanks. Luca Boccassi --- From f6a26c88648f181f03fc00b3731f8174d97dc222 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com> Date: Mon, 16 Jan 2023 14:07:23 +0100 Subject: [PATCH] net/mlx5: fix warning about copy length MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ upstream commit c19580fb8e9ad6f153d46f731ec7cd2050b3021b ] Use RTE_PTR_ADD where copying to the offset of a field in a structure holding multiple fields, to avoid compiler warnings with decorated rte_memcpy. 
Fixes: 16a7dbc4f690 ("net/mlx5: make flow modify action list thread safe") Signed-off-by: Morten Brørup <mb@smartsharesystems.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 85dcc399c2..1069b84157 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5688,7 +5688,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) "cannot allocate resource memory"); return NULL; } - rte_memcpy(&entry->ft_type, + rte_memcpy(RTE_PTR_ADD(entry, offsetof(typeof(*entry), ft_type)), RTE_PTR_ADD(ref, offsetof(typeof(*ref), ft_type)), key_len + data_len); if (entry->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.421415580 +0000 +++ 0026-net-mlx5-fix-warning-about-copy-length.patch 2024-03-18 12:58:39.271349065 +0000 @@ -1 +1 @@ -From c19580fb8e9ad6f153d46f731ec7cd2050b3021b Mon Sep 17 00:00:00 2001 +From f6a26c88648f181f03fc00b3731f8174d97dc222 Mon Sep 17 00:00:00 2001 @@ -8,0 +9,2 @@ +[ upstream commit c19580fb8e9ad6f153d46f731ec7cd2050b3021b ] + @@ -14 +15,0 @@ -Cc: stable@dpdk.org @@ -23 +24 @@ -index 4badde1a9a..d434c678c8 100644 +index 85dcc399c2..1069b84157 100644 @@ -26 +27 @@ -@@ -6205,7 +6205,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) +@@ -5688,7 +5688,7 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx)
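The copy-at-field-offset pattern used by the fix can be shown with a small invented struct; PTR_ADD below stands in for RTE_PTR_ADD and the layout is hypothetical. Taking `&entry->ft_type` gives the compiler a pointer to a one-byte object, so a fortified memcpy can warn that the copy length overflows it; computing the destination as a byte offset from the struct base avoids that while producing the same address.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for RTE_PTR_ADD: advance a pointer by a byte offset. */
#define PTR_ADD(p, off) ((void *)((uintptr_t)(p) + (off)))

struct entry {
	void *next;          /* not part of the copied region */
	uint8_t ft_type;     /* first copied field */
	uint8_t key[3];
	uint8_t data[8];     /* copied region spans ft_type..end of struct */
};

static void
copy_from_ft_type(struct entry *dst, const struct entry *src)
{
	size_t len = sizeof(*src) - offsetof(struct entry, ft_type);

	/* Using &dst->ft_type here could trigger a copy-length warning;
	 * the offset form keeps the destination typed as the whole object. */
	memcpy(PTR_ADD(dst, offsetof(struct entry, ft_type)),
	       PTR_ADD(src, offsetof(struct entry, ft_type)), len);
}
```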
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/905283004dcdfb919e005f55c9a89c8a9e10ed6a Thanks. Luca Boccassi --- From 905283004dcdfb919e005f55c9a89c8a9e10ed6a Mon Sep 17 00:00:00 2001 From: Bing Zhao <bingz@nvidia.com> Date: Fri, 8 Mar 2024 05:22:37 +0200 Subject: [PATCH] net/mlx5: fix drop action release timing [ upstream commit 22a3761b782b7c46ca428209b15b4f7382a40a62 ] When creating the drop action Devx object, the global counter set is also used as in the regular or hairpin queue creation. The drop action should be destroyed before the global counter set release procedure. Or else, the counter set object is still referenced and cannot be released successfully. This would cause the counter set resources to be exhausted after starting and stopping the ports repeatedly. 
Fixes: 65b3cd0dc39b ("net/mlx5: create global drop action") Signed-off-by: Bing Zhao <bingz@nvidia.com> Acked-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 4d76da484b..96e732950d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -2084,12 +2084,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) priv->txqs = NULL; } mlx5_proc_priv_uninit(dev); + if (priv->drop_queue.hrxq) + mlx5_drop_action_destroy(dev); if (priv->q_counters) { mlx5_devx_cmd_destroy(priv->q_counters); priv->q_counters = NULL; } - if (priv->drop_queue.hrxq) - mlx5_drop_action_destroy(dev); if (priv->mreg_cp_tbl) mlx5_hlist_destroy(priv->mreg_cp_tbl); mlx5_mprq_free_mp(dev); -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.381536566 +0000 +++ 0025-net-mlx5-fix-drop-action-release-timing.patch 2024-03-18 12:58:39.255348625 +0000 @@ -1 +1 @@ -From 22a3761b782b7c46ca428209b15b4f7382a40a62 Mon Sep 17 00:00:00 2001 +From 905283004dcdfb919e005f55c9a89c8a9e10ed6a Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 22a3761b782b7c46ca428209b15b4f7382a40a62 ] + @@ -16 +17,0 @@ -Cc: stable@dpdk.org @@ -25 +26 @@ -index 8b54843a43..d1a63822a5 100644 +index 4d76da484b..96e732950d 100644 @@ -28 +29 @@ -@@ -2382,12 +2382,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) +@@ -2084,12 +2084,12 @@ mlx5_dev_close(struct rte_eth_dev *dev)
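The ordering constraint the patch enforces can be modelled with a tiny reference count. This is an illustrative sketch with invented types, not the mlx5 code: releasing the shared counter set fails while a holder still references it, so the holder (the drop action) must be destroyed first, which is exactly the reordering done in mlx5_dev_close().

```c
#include <errno.h>

struct counter_set { int refcnt; };

/* Release succeeds only when nothing references the counter set;
 * otherwise the object stays alive and its resources leak. */
static int
counter_set_release(struct counter_set *cs)
{
	return (cs->refcnt != 0) ? -EBUSY : 0;
}

/* The drop action holds one reference; destroying it drops that
 * reference, so it must run before counter_set_release(). */
static void
drop_action_destroy(struct counter_set *cs)
{
	cs->refcnt--;
}
```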
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/0ddc41f5c64f3f473f562d8bea8a5b8a68b05a32 Thanks. Luca Boccassi --- From 0ddc41f5c64f3f473f562d8bea8a5b8a68b05a32 Mon Sep 17 00:00:00 2001 From: Bing Zhao <bingz@nvidia.com> Date: Thu, 7 Mar 2024 10:09:24 +0200 Subject: [PATCH] net/mlx5: fix age position in hairpin split [ upstream commit 4c89815eab7471b98388dc958b95777d341f05fc ] When splitting a hairpin rule implicitly, the count action will be on either Tx or Rx subflow based on the encapsulation checking. Once there is a flow rule with both count and age action, one counter will be reused. If there is only age action and the ASO flow hit is supported, the flow hit will be chosen instead of a counter. In the previous flow splitting, the age would always be in the Rx part, while the count would be on the Tx part when there is an encap. Before this commit, 2 issues can be observed with a hairpin split: 1. On the root table, one counter was used on both Rx and Tx parts for age and count actions. Then one ingress packet will be counted twice. 2. On the non-root table, an extra ASO flow hit was used on the Rx part. This would cause some overhead. The age and count actions should be in the same subflow instead of 2. 
Fixes: daed4b6e3db2 ("net/mlx5: use aging by counter when counter exists") Signed-off-by: Bing Zhao <bingz@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/mlx5_flow.c | 1 + drivers/net/mlx5/mlx5_flow_dv.c | 3 +-- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 89c98f95f9..1e8d9ac978 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -5135,6 +5135,7 @@ flow_hairpin_split(struct rte_eth_dev *dev, } break; case RTE_FLOW_ACTION_TYPE_COUNT: + case RTE_FLOW_ACTION_TYPE_AGE: if (encap) { rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action)); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 68d3ee0c36..85dcc399c2 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -18585,8 +18585,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev, LIST_FOREACH(act, &age_info->aged_aso, next) { nb_flows++; if (nb_contexts) { - context[nb_flows - 1] = - act->age_params.context; + context[nb_flows - 1] = act->age_params.context; if (!(--nb_contexts)) break; } -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.319119761 +0000 +++ 0024-net-mlx5-fix-age-position-in-hairpin-split.patch 2024-03-18 12:58:39.251348516 +0000 @@ -1 +1 @@ -From 4c89815eab7471b98388dc958b95777d341f05fc Mon Sep 17 00:00:00 2001 +From 0ddc41f5c64f3f473f562d8bea8a5b8a68b05a32 Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 4c89815eab7471b98388dc958b95777d341f05fc ] + @@ -26 +27,0 @@ -Cc: stable@dpdk.org @@ -36 +37 @@ -index 6484874c35..f31fdfbf3d 100644 +index 89c98f95f9..1e8d9ac978 100644 @@ -39 +40 @@ -@@ -5399,6 +5399,7 @@ flow_hairpin_split(struct rte_eth_dev *dev, +@@ -5135,6 +5135,7 @@ flow_hairpin_split(struct rte_eth_dev *dev, @@ -48 +49 @@ -index 80239bebee..4badde1a9a 100644 +index 68d3ee0c36..85dcc399c2 100644 @@ -51 +52 
@@ -@@ -19361,8 +19361,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev, +@@ -18585,8 +18585,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev,
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/3b972375de7599a243be9094e0903d8ab29f2c1f Thanks. Luca Boccassi --- From 3b972375de7599a243be9094e0903d8ab29f2c1f Mon Sep 17 00:00:00 2001 From: Eli Britstein <elibr@nvidia.com> Date: Thu, 7 Mar 2024 08:13:45 +0200 Subject: [PATCH] net/mlx5: prevent ioctl failure log flooding [ upstream commit 84ba1440c5dff8d716c1a2643aa3eb5e806619ff ] The following log is printed in WARNING severity: mlx5_net: port 1 ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed: Operation not supported Reduce the severity to DEBUG to prevent this log from flooding when there are hundreds of ports probed without supporting this flow ctrl query. 
Fixes: 1256805dd54d ("net/mlx5: move Linux-specific functions") Signed-off-by: Eli Britstein <elibr@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> --- drivers/net/mlx5/linux/mlx5_ethdev_os.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c index 0ee8c58ba7..4f3e790c0b 100644 --- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c +++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c @@ -671,7 +671,7 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ifr.ifr_data = (void *)ðpause; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - DRV_LOG(WARNING, + DRV_LOG(DEBUG, "port %u ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:" " %s", dev->data->port_id, strerror(rte_errno)); -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.281996160 +0000 +++ 0023-net-mlx5-prevent-ioctl-failure-log-flooding.patch 2024-03-18 12:58:39.227347856 +0000 @@ -1 +1 @@ -From 84ba1440c5dff8d716c1a2643aa3eb5e806619ff Mon Sep 17 00:00:00 2001 +From 3b972375de7599a243be9094e0903d8ab29f2c1f Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 84ba1440c5dff8d716c1a2643aa3eb5e806619ff ] + @@ -16 +17,0 @@ -Cc: stable@dpdk.org @@ -25 +26 @@ -index e1bc3f7c2e..1f511d6e00 100644 +index 0ee8c58ba7..4f3e790c0b 100644
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/3be62ef2f36b7afedfd69ee2989cd5b6ae115208 Thanks. Luca Boccassi --- From 3be62ef2f36b7afedfd69ee2989cd5b6ae115208 Mon Sep 17 00:00:00 2001 From: Dariusz Sosnowski <dsosnowski@nvidia.com> Date: Wed, 6 Mar 2024 21:21:48 +0100 Subject: [PATCH] net/mlx5: fix template clean up of FDB control flow rule [ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ] This patch refactors the creation and clean up of templates used for FDB control flow rules, when HWS is enabled. All pattern and actions templates, and template tables are stored in a separate structure, `mlx5_flow_hw_ctrl_fdb`. It is allocated if and only if E-Switch is enabled. During HWS clean up, all of these templates are explicitly destroyed, instead of relying on the general template clean up.
Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS") Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain") Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/mlx5.h | 6 +- drivers/net/mlx5/mlx5_flow.h | 19 +++ drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++-------------- 3 files changed, 166 insertions(+), 114 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9832b6df52..ca0e9ee647 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1739,11 +1739,7 @@ struct mlx5_priv { rte_spinlock_t hw_ctrl_lock; LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows; LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows; - struct rte_flow_template_table *hw_esw_sq_miss_root_tbl; - struct rte_flow_template_table *hw_esw_sq_miss_tbl; - struct rte_flow_template_table *hw_esw_zero_tbl; - struct rte_flow_template_table *hw_tx_meta_cpy_tbl; - struct rte_flow_template_table *hw_lacp_rx_tbl; + struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb; struct rte_flow_pattern_template *hw_tx_repr_tagging_pt; struct rte_flow_actions_template *hw_tx_repr_tagging_at; struct rte_flow_template_table *hw_tx_repr_tagging_tbl; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 42db9ba12a..9ce34ef556 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -2186,6 +2186,25 @@ struct mlx5_flow_hw_ctrl_rx { [MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX]; }; +/* Contains all templates required for control flow rules in FDB with HWS. 
*/ +struct mlx5_flow_hw_ctrl_fdb { + struct rte_flow_pattern_template *esw_mgr_items_tmpl; + struct rte_flow_actions_template *regc_jump_actions_tmpl; + struct rte_flow_template_table *hw_esw_sq_miss_root_tbl; + struct rte_flow_pattern_template *regc_sq_items_tmpl; + struct rte_flow_actions_template *port_actions_tmpl; + struct rte_flow_template_table *hw_esw_sq_miss_tbl; + struct rte_flow_pattern_template *port_items_tmpl; + struct rte_flow_actions_template *jump_one_actions_tmpl; + struct rte_flow_template_table *hw_esw_zero_tbl; + struct rte_flow_pattern_template *tx_meta_items_tmpl; + struct rte_flow_actions_template *tx_meta_actions_tmpl; + struct rte_flow_template_table *hw_tx_meta_cpy_tbl; + struct rte_flow_pattern_template *lacp_rx_items_tmpl; + struct rte_flow_actions_template *lacp_rx_actions_tmpl; + struct rte_flow_template_table *hw_lacp_rx_tbl; +}; + #define MLX5_CTRL_PROMISCUOUS (RTE_BIT32(0)) #define MLX5_CTRL_ALL_MULTICAST (RTE_BIT32(1)) #define MLX5_CTRL_BROADCAST (RTE_BIT32(2)) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 881aa40262..f4e125667f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -6287,6 +6287,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev, return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error); } +/** + * Cleans up all template tables and pattern, and actions templates used for + * FDB control flow rules. + * + * @param dev + * Pointer to Ethernet device. + */ +static void +flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb; + + if (!priv->hw_ctrl_fdb) + return; + hw_ctrl_fdb = priv->hw_ctrl_fdb; + /* Clean up templates used for LACP default miss table. 
*/ + if (hw_ctrl_fdb->hw_lacp_rx_tbl) + claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL)); + if (hw_ctrl_fdb->lacp_rx_actions_tmpl) + claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl, + NULL)); + if (hw_ctrl_fdb->lacp_rx_items_tmpl) + claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl, + NULL)); + /* Clean up templates used for default Tx metadata copy. */ + if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl) + claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL)); + if (hw_ctrl_fdb->tx_meta_actions_tmpl) + claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl, + NULL)); + if (hw_ctrl_fdb->tx_meta_items_tmpl) + claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl, + NULL)); + /* Clean up templates used for default FDB jump rule. */ + if (hw_ctrl_fdb->hw_esw_zero_tbl) + claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL)); + if (hw_ctrl_fdb->jump_one_actions_tmpl) + claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl, + NULL)); + if (hw_ctrl_fdb->port_items_tmpl) + claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl, + NULL)); + /* Clean up templates used for default SQ miss flow rules - non-root table. */ + if (hw_ctrl_fdb->hw_esw_sq_miss_tbl) + claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL)); + if (hw_ctrl_fdb->regc_sq_items_tmpl) + claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl, + NULL)); + if (hw_ctrl_fdb->port_actions_tmpl) + claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl, + NULL)); + /* Clean up templates used for default SQ miss flow rules - root table. 
*/ + if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) + claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL)); + if (hw_ctrl_fdb->regc_jump_actions_tmpl) + claim_zero(flow_hw_actions_template_destroy(dev, + hw_ctrl_fdb->regc_jump_actions_tmpl, NULL)); + if (hw_ctrl_fdb->esw_mgr_items_tmpl) + claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl, + NULL)); + /* Clean up templates structure for FDB control flow rules. */ + mlx5_free(hw_ctrl_fdb); + priv->hw_ctrl_fdb = NULL; +} + /* * Create a table on the root group to for the LACP traffic redirecting. * @@ -6336,110 +6402,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev, * @return * 0 on success, negative values otherwise */ -static __rte_unused int +static int flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL; - struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL; - struct rte_flow_pattern_template *port_items_tmpl = NULL; - struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL; - struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL; - struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL; - struct rte_flow_actions_template *port_actions_tmpl = NULL; - struct rte_flow_actions_template *jump_one_actions_tmpl = NULL; - struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL; - struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL; + struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb; uint32_t xmeta = priv->sh->config.dv_xmeta_en; uint32_t repr_matching = priv->sh->config.repr_matching; - int ret; + MLX5_ASSERT(priv->hw_ctrl_fdb == NULL); + hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY); + if (!hw_ctrl_fdb) { + DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates", + dev->data->port_id); + rte_errno = ENOMEM; + goto err; + 
} + priv->hw_ctrl_fdb = hw_ctrl_fdb; /* Create templates and table for default SQ miss flow rules - root table. */ - esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error); - if (!esw_mgr_items_tmpl) { + hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error); + if (!hw_ctrl_fdb->esw_mgr_items_tmpl) { DRV_LOG(ERR, "port %u failed to create E-Switch Manager item" " template for control flows", dev->data->port_id); goto err; } - regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error); - if (!regc_jump_actions_tmpl) { + hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template + (dev, error); + if (!hw_ctrl_fdb->regc_jump_actions_tmpl) { DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template" " for control flows", dev->data->port_id); goto err; } - MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL); - priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table - (dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error); - if (!priv->hw_esw_sq_miss_root_tbl) { + hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table + (dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl, + error); + if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) { DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)" " for control flows", dev->data->port_id); goto err; } /* Create templates and table for default SQ miss flow rules - non-root table. 
*/ - regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error); - if (!regc_sq_items_tmpl) { + hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error); + if (!hw_ctrl_fdb->regc_sq_items_tmpl) { DRV_LOG(ERR, "port %u failed to create SQ item template for" " control flows", dev->data->port_id); goto err; } - port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error); - if (!port_actions_tmpl) { + hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error); + if (!hw_ctrl_fdb->port_actions_tmpl) { DRV_LOG(ERR, "port %u failed to create port action template" " for control flows", dev->data->port_id); goto err; } - MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL); - priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl, - port_actions_tmpl, error); - if (!priv->hw_esw_sq_miss_tbl) { + hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table + (dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl, + error); + if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) { DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)" " for control flows", dev->data->port_id); goto err; } /* Create templates and table for default FDB jump flow rules. 
*/ - port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error); - if (!port_items_tmpl) { + hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error); + if (!hw_ctrl_fdb->port_items_tmpl) { DRV_LOG(ERR, "port %u failed to create SQ item template for" " control flows", dev->data->port_id); goto err; } - jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template + hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template (dev, MLX5_HW_LOWEST_USABLE_GROUP, error); - if (!jump_one_actions_tmpl) { + if (!hw_ctrl_fdb->jump_one_actions_tmpl) { DRV_LOG(ERR, "port %u failed to create jump action template" " for control flows", dev->data->port_id); goto err; } - MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL); - priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl, - jump_one_actions_tmpl, - error); - if (!priv->hw_esw_zero_tbl) { + hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table + (dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl, + error); + if (!hw_ctrl_fdb->hw_esw_zero_tbl) { DRV_LOG(ERR, "port %u failed to create table for default jump to group 1" " for control flows", dev->data->port_id); goto err; } /* Create templates and table for default Tx metadata copy flow rule. 
*/ if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) { - tx_meta_items_tmpl = + hw_ctrl_fdb->tx_meta_items_tmpl = flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error); - if (!tx_meta_items_tmpl) { + if (!hw_ctrl_fdb->tx_meta_items_tmpl) { DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern" " template for control flows", dev->data->port_id); goto err; } - tx_meta_actions_tmpl = + hw_ctrl_fdb->tx_meta_actions_tmpl = flow_hw_create_tx_default_mreg_copy_actions_template(dev, error); - if (!tx_meta_actions_tmpl) { + if (!hw_ctrl_fdb->tx_meta_actions_tmpl) { DRV_LOG(ERR, "port %u failed to Tx metadata copy actions" " template for control flows", dev->data->port_id); goto err; } - MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL); - priv->hw_tx_meta_cpy_tbl = - flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl, - tx_meta_actions_tmpl, error); - if (!priv->hw_tx_meta_cpy_tbl) { + hw_ctrl_fdb->hw_tx_meta_cpy_tbl = + flow_hw_create_tx_default_mreg_copy_table + (dev, hw_ctrl_fdb->tx_meta_items_tmpl, + hw_ctrl_fdb->tx_meta_actions_tmpl, error); + if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) { DRV_LOG(ERR, "port %u failed to create table for default" " Tx metadata copy flow rule", dev->data->port_id); goto err; @@ -6447,71 +6512,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error } /* Create LACP default miss table. 
*/ if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) { - lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error); - if (!lacp_rx_items_tmpl) { + hw_ctrl_fdb->lacp_rx_items_tmpl = + flow_hw_create_lacp_rx_pattern_template(dev, error); + if (!hw_ctrl_fdb->lacp_rx_items_tmpl) { DRV_LOG(ERR, "port %u failed to create pattern template" " for LACP Rx traffic", dev->data->port_id); goto err; } - lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error); - if (!lacp_rx_actions_tmpl) { + hw_ctrl_fdb->lacp_rx_actions_tmpl = + flow_hw_create_lacp_rx_actions_template(dev, error); + if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) { DRV_LOG(ERR, "port %u failed to create actions template" " for LACP Rx traffic", dev->data->port_id); goto err; } - priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl, - lacp_rx_actions_tmpl, error); - if (!priv->hw_lacp_rx_tbl) { + hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table + (dev, hw_ctrl_fdb->lacp_rx_items_tmpl, + hw_ctrl_fdb->lacp_rx_actions_tmpl, error); + if (!hw_ctrl_fdb->hw_lacp_rx_tbl) { DRV_LOG(ERR, "port %u failed to create template table for" " for LACP Rx traffic", dev->data->port_id); goto err; } } return 0; + err: - /* Do not overwrite the rte_errno. 
*/ - ret = -rte_errno; - if (ret == 0) - ret = rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Failed to create control tables."); - if (priv->hw_tx_meta_cpy_tbl) { - flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL); - priv->hw_tx_meta_cpy_tbl = NULL; - } - if (priv->hw_esw_zero_tbl) { - flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL); - priv->hw_esw_zero_tbl = NULL; - } - if (priv->hw_esw_sq_miss_tbl) { - flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL); - priv->hw_esw_sq_miss_tbl = NULL; - } - if (priv->hw_esw_sq_miss_root_tbl) { - flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL); - priv->hw_esw_sq_miss_root_tbl = NULL; - } - if (lacp_rx_actions_tmpl) - flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL); - if (tx_meta_actions_tmpl) - flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL); - if (jump_one_actions_tmpl) - flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL); - if (port_actions_tmpl) - flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL); - if (regc_jump_actions_tmpl) - flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL); - if (lacp_rx_items_tmpl) - flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL); - if (tx_meta_items_tmpl) - flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL); - if (port_items_tmpl) - flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL); - if (regc_sq_items_tmpl) - flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL); - if (esw_mgr_items_tmpl) - flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL); - return ret; + flow_hw_cleanup_ctrl_fdb_tables(dev); + return -EINVAL; } static void @@ -7308,6 +7336,7 @@ err: mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool); priv->hws_cpool = NULL; } + flow_hw_cleanup_ctrl_fdb_tables(dev); flow_hw_free_vport_actions(priv); for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { if (priv->hw_drop[i]) @@ -7357,6 
+7386,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) return; flow_hw_rxq_flag_set(dev, false); flow_hw_flush_all_ctrl_flows(dev); + flow_hw_cleanup_ctrl_fdb_tables(dev); flow_hw_cleanup_tx_repr_tagging(dev); flow_hw_cleanup_ctrl_rx_tables(dev); while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) { @@ -8958,8 +8988,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool proxy_port_id, port_id); return 0; } - if (!proxy_priv->hw_esw_sq_miss_root_tbl || - !proxy_priv->hw_esw_sq_miss_tbl) { + if (!proxy_priv->hw_ctrl_fdb || + !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl || + !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) { DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but " "default flow tables were not created.", proxy_port_id, port_id); @@ -8991,7 +9022,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool actions[2] = (struct rte_flow_action) { .type = RTE_FLOW_ACTION_TYPE_END, }; - ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl, + ret = flow_hw_create_ctrl_flow(dev, proxy_dev, + proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, items, 0, actions, 0, &flow_info, external); if (ret) { DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d", @@ -9022,7 +9054,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool .type = RTE_FLOW_ACTION_TYPE_END, }; flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS; - ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl, + ret = flow_hw_create_ctrl_flow(dev, proxy_dev, + proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl, items, 0, actions, 0, &flow_info, external); if (ret) { DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d", @@ -9068,8 +9101,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn) proxy_priv = proxy_dev->data->dev_private; if (!proxy_priv->dr_ctx) return 0; - if 
(!proxy_priv->hw_esw_sq_miss_root_tbl || - !proxy_priv->hw_esw_sq_miss_tbl) + if (!proxy_priv->hw_ctrl_fdb || + !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl || + !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) return 0; cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows); while (cf != NULL) { @@ -9136,7 +9170,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) proxy_port_id, port_id); return 0; } - if (!proxy_priv->hw_esw_zero_tbl) { + if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) { DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but " "default flow tables were not created.", proxy_port_id, port_id); @@ -9144,7 +9178,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) return -rte_errno; } return flow_hw_create_ctrl_flow(dev, proxy_dev, - proxy_priv->hw_esw_zero_tbl, + proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl, items, 0, actions, 0, &flow_info, false); } @@ -9196,10 +9230,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev) }; MLX5_ASSERT(priv->master); - if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl) + if (!priv->dr_ctx || + !priv->hw_ctrl_fdb || + !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl) return 0; return flow_hw_create_ctrl_flow(dev, dev, - priv->hw_tx_meta_cpy_tbl, + priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl, eth_all, 0, copy_reg_action, 0, &flow_info, false); } @@ -9291,10 +9327,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev) .type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX, }; - if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl) + if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl) return 0; - return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0, - miss_action, 0, &flow_info, false); + return flow_hw_create_ctrl_flow(dev, dev, + priv->hw_ctrl_fdb->hw_lacp_rx_tbl, + eth_lacp, 0, miss_action, 0, &flow_info, false); } static uint32_t -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if 
non-empty: --- --- - 2024-03-18 12:58:40.228549025 +0000 +++ 0022-net-mlx5-fix-template-clean-up-of-FDB-control-flow-r.patch 2024-03-18 12:58:39.223347746 +0000 @@ -1 +1 @@ -From 48db3b61c3b81c6efcd343b7929a000eb998cb0b Mon Sep 17 00:00:00 2001 +From 3be62ef2f36b7afedfd69ee2989cd5b6ae115208 Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit 48db3b61c3b81c6efcd343b7929a000eb998cb0b ] + @@ -16 +17,0 @@ -Cc: stable@dpdk.org @@ -27 +28 @@ -index 6ff8f322e0..0091a2459c 100644 +index 9832b6df52..ca0e9ee647 100644 @@ -30 +31 @@ -@@ -1894,11 +1894,7 @@ struct mlx5_priv { +@@ -1739,11 +1739,7 @@ struct mlx5_priv { @@ -44 +45 @@ -index ff3830a888..34b5e0f45b 100644 +index 42db9ba12a..9ce34ef556 100644 @@ -47 +48 @@ -@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx { +@@ -2186,6 +2186,25 @@ struct mlx5_flow_hw_ctrl_rx { @@ -74 +75 @@ -index a96c829045..feeb071b4b 100644 +index 881aa40262..f4e125667f 100644 @@ -77 +78 @@ -@@ -9363,6 +9363,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev, +@@ -6287,6 +6287,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev, @@ -150 +151 @@ -@@ -9412,110 +9478,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev, +@@ -6336,110 +6402,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev, @@ -306 +307 @@ -@@ -9523,71 +9588,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error +@@ -6447,71 +6512,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error @@ -391,4 +392,4 @@ -@@ -10619,6 +10647,7 @@ err: - action_template_drop_release(dev); - mlx5_flow_quota_destroy(dev); - flow_hw_destroy_send_to_kernel_action(priv); +@@ -7308,6 +7336,7 @@ err: + mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool); + priv->hws_cpool = NULL; + } @@ -399,2 +400,2 @@ -@@ -10681,6 +10710,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) - dev->flow_fp_ops = &rte_flow_fp_default_ops; +@@ -7357,6 +7386,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) + 
return; @@ -406,2 +407,2 @@ - action_template_drop_release(dev); -@@ -13259,8 +13289,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool + while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) { +@@ -8958,8 +8988,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool @@ -419 +420 @@ -@@ -13292,7 +13323,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool +@@ -8991,7 +9022,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool @@ -429 +430 @@ -@@ -13323,7 +13355,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool +@@ -9022,7 +9054,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool @@ -439 +440 @@ -@@ -13369,8 +13402,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn) +@@ -9068,8 +9101,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn) @@ -451 +452 @@ -@@ -13437,7 +13471,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) +@@ -9136,7 +9170,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) @@ -460 +461 @@ -@@ -13445,7 +13479,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) +@@ -9144,7 +9178,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev) @@ -469 +470 @@ -@@ -13497,10 +13531,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev) +@@ -9196,10 +9230,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev) @@ -484 +485 @@ -@@ -13592,10 +13628,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev) +@@ -9291,10 +9327,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
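The patch queued above replaces five separate `priv->hw_*` table pointers with one heap-allocated `mlx5_flow_hw_ctrl_fdb` struct, so the error path can fall through to a single idempotent cleanup routine (`flow_hw_cleanup_ctrl_fdb_tables()`) instead of destroying each template and table by hand. A minimal, self-contained sketch of that pattern follows — the names `ctrl_fdb`, `ctrl_fdb_create`, and `ctrl_fdb_cleanup` are hypothetical, and plain `malloc`/`free` stand in for the real template create/destroy calls:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical resource bundle, mirroring mlx5_flow_hw_ctrl_fdb. */
struct ctrl_fdb {
	void *items_tmpl;
	void *actions_tmpl;
	void *table;
};

/* Idempotent cleanup: NULL members are skipped and the handle is
 * reset, so it is safe after partial initialization and safe to
 * call a second time. */
static void
ctrl_fdb_cleanup(struct ctrl_fdb **fdb)
{
	if (*fdb == NULL)
		return;
	free((*fdb)->table);        /* free(NULL) is a no-op, like the  */
	free((*fdb)->actions_tmpl); /* NULL checks before each destroy  */
	free((*fdb)->items_tmpl);   /* call in the real driver.         */
	free(*fdb);
	*fdb = NULL;
}

/* fail_at selects which allocation step "fails" (-1 = none fail). */
static int
ctrl_fdb_create(struct ctrl_fdb **out, int fail_at)
{
	struct ctrl_fdb *fdb = calloc(1, sizeof(*fdb));

	if (fdb == NULL)
		return -1;
	*out = fdb;
	if (fail_at == 0 || (fdb->items_tmpl = malloc(8)) == NULL)
		goto err;
	if (fail_at == 1 || (fdb->actions_tmpl = malloc(8)) == NULL)
		goto err;
	if (fail_at == 2 || (fdb->table = malloc(8)) == NULL)
		goto err;
	return 0;
err:
	/* One error path releases whatever subset was created. */
	ctrl_fdb_cleanup(out);
	return -1;
}
```

Because the cleanup function is idempotent, the patch can call it from the `err:` label of `flow_hw_create_ctrl_tables()`, from the `flow_hw_configure()` error path, and from `flow_hw_resource_release()` without double-free risk.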
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/7858502f9ae5ff1df72fd9ccf4ff12d8a57bbf38 Thanks. Luca Boccassi --- From 7858502f9ae5ff1df72fd9ccf4ff12d8a57bbf38 Mon Sep 17 00:00:00 2001 From: Maayan Kashani <mkashani@nvidia.com> Date: Wed, 6 Mar 2024 08:02:07 +0200 Subject: [PATCH] net/mlx5: fix DR context release ordering [ upstream commit d068681b637da6b7857c13711eb1a675b2a341e3 ] Creating rules on group >0 creates a jump action on the group table. Non-template code releases the group data under the shared mlx5dr free code, and the mlx5dr context was already closed in the HWS code. Remove the mlx5dr context release from the HWS resource release function. Fixes: b401400db24e ("net/mlx5: add port flow configuration") Signed-off-by: Maayan Kashani <mkashani@nvidia.com> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/mlx5.c | 7 +++++++ drivers/net/mlx5/mlx5_flow_hw.c | 2 -- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index d41b0d1363..4d76da484b 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -2058,6 +2058,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_item_port_cleanup(dev); #ifdef HAVE_MLX5_HWS_SUPPORT flow_hw_destroy_vport_action(dev); + /* dr context will be closed after mlx5_os_free_shared_dr. 
*/ flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); if (priv->sh->config.dv_flow_en == 2) { @@ -2093,6 +2094,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_hlist_destroy(priv->mreg_cp_tbl); mlx5_mprq_free_mp(dev); mlx5_os_free_shared_dr(priv); +#ifdef HAVE_MLX5_HWS_SUPPORT + if (priv->dr_ctx) { + claim_zero(mlx5dr_context_close(priv->dr_ctx)); + priv->dr_ctx = NULL; + } +#endif if (priv->rss_conf.rss_key != NULL) mlx5_free(priv->rss_conf.rss_key); if (priv->reta_idx != NULL) diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 927be86c36..881aa40262 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -7407,8 +7407,6 @@ flow_hw_resource_release(struct rte_eth_dev *dev) } mlx5_free(priv->hw_q); priv->hw_q = NULL; - claim_zero(mlx5dr_context_close(priv->dr_ctx)); - priv->dr_ctx = NULL; priv->nb_queue = 0; } -- 2.39.2 --- Diff of the applied patch vs upstream commit (please double-check if non-empty: --- --- - 2024-03-18 12:58:40.176495177 +0000 +++ 0021-net-mlx5-fix-DR-context-release-ordering.patch 2024-03-18 12:58:39.211347416 +0000 @@ -1 +1 @@ -From d068681b637da6b7857c13711eb1a675b2a341e3 Mon Sep 17 00:00:00 2001 +From 7858502f9ae5ff1df72fd9ccf4ff12d8a57bbf38 Mon Sep 17 00:00:00 2001 @@ -5,0 +6,2 @@ +[ upstream commit d068681b637da6b7857c13711eb1a675b2a341e3 ] + @@ -13 +14,0 @@ -Cc: stable@dpdk.org @@ -23 +24 @@ -index 39dc1830d1..8b54843a43 100644 +index d41b0d1363..4d76da484b 100644 @@ -26,2 +27,2 @@ -@@ -2355,6 +2355,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) - mlx5_indirect_list_handles_release(dev); +@@ -2058,6 +2058,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) + mlx5_flex_item_port_cleanup(dev); @@ -33,2 +34,2 @@ - if (priv->tlv_options != NULL) { -@@ -2391,6 +2392,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) + if (priv->sh->config.dv_flow_en == 2) { +@@ -2093,6 +2094,12 @@ mlx5_dev_close(struct rte_eth_dev *dev) @@ -48 +49 @@ -index 817461017f..c89bd00fb0 100644 +index 
927be86c36..881aa40262 100644 @@ -51 +52 @@ -@@ -10734,13 +10734,11 @@ flow_hw_resource_release(struct rte_eth_dev *dev) +@@ -7407,8 +7407,6 @@ flow_hw_resource_release(struct rte_eth_dev *dev) @@ -56,5 +56,0 @@ - if (priv->shared_host) { - struct mlx5_priv *host_priv = priv->shared_host->data->dev_private; - __atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED); - priv->shared_host = NULL; - } @@ -62,2 +57,0 @@ - mlx5_free(priv->hw_attr); - priv->hw_attr = NULL; @@ -64,0 +59,2 @@ + } +
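The ordering bug fixed above is the classic teardown-dependency problem: objects created on a shared context must all be released before the context itself is closed, which is why the `mlx5dr_context_close()` call moves out of `flow_hw_resource_release()` to after `mlx5_os_free_shared_dr()` in `mlx5_dev_close()`. A tiny reference-counting sketch of the invariant (all names hypothetical; the real mlx5dr context does not expose such a counter):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the mlx5dr context. */
struct ctx {
	int refs; /* objects still created on this context */
};

static struct ctx *
ctx_open(void)
{
	return calloc(1, sizeof(struct ctx));
}

/* Closing is only legal once every dependent object is gone; in the
 * real driver, a premature close leaves later destroy calls (e.g.
 * for group jump actions) touching an already-freed context. */
static int
ctx_close(struct ctx *c)
{
	if (c->refs != 0)
		return -1;
	free(c);
	return 0;
}

static void obj_create(struct ctx *c)  { c->refs++; }
static void obj_destroy(struct ctx *c) { c->refs--; }
```

The patch enforces exactly this: the context close becomes the last step of device teardown, after all shared DR resources have been freed.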
Hi, FYI, your patch has been queued to stable release 22.11.5 Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet. It will be pushed if I get no objections before 03/20/24. So please shout if anyone has objections. Also note that after the patch there's a diff of the upstream commit vs the patch applied to the branch. This will indicate if there was any rebasing needed to apply to the stable branch. If there were code changes for rebasing (ie: not only metadata diffs), please double check that the rebase was correctly done. Queued patches are on a temporary branch at: https://github.com/bluca/dpdk-stable This queued commit can be viewed at: https://github.com/bluca/dpdk-stable/commit/03243a27739ee97b02d23050b4a6245c239428a1 Thanks. Luca Boccassi --- From 03243a27739ee97b02d23050b4a6245c239428a1 Mon Sep 17 00:00:00 2001 From: Gregory Etelson <getelson@nvidia.com> Date: Thu, 29 Feb 2024 18:05:03 +0200 Subject: [PATCH] net/mlx5: remove duplication of L3 flow item validation [ upstream commit 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 ] Remove code duplication in DV L3 item validation and translation. Fixes: 3193c2494eea ("net/mlx5: fix L4 protocol validation") Signed-off-by: Gregory Etelson <getelson@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 151 +++++++++----------------------- 1 file changed, 43 insertions(+), 108 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f5f33a9eca..a4fca70b07 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -6971,6 +6971,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev, return 0; } +static __rte_always_inline uint8_t +mlx5_flow_l3_next_protocol(const struct rte_flow_item *l3_item, + enum MLX5_SET_MATCHER key_type) +{ +#define MLX5_L3_NEXT_PROTOCOL(i, ms) \ + ((i)->type == RTE_FLOW_ITEM_TYPE_IPV4 ? 
\ + ((const struct rte_flow_item_ipv4 *)(i)->ms)->hdr.next_proto_id : \ + (i)->type == RTE_FLOW_ITEM_TYPE_IPV6 ? \ + ((const struct rte_flow_item_ipv6 *)(i)->ms)->hdr.proto : \ + (i)->type == RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT ? \ + ((const struct rte_flow_item_ipv6_frag_ext *)(i)->ms)->hdr.next_header :\ + 0xff) + + uint8_t next_protocol; + + if (l3_item->mask != NULL && l3_item->spec != NULL) { + next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, mask); + if (next_protocol) + next_protocol &= MLX5_L3_NEXT_PROTOCOL(l3_item, spec); + else + next_protocol = 0xff; + } else if (key_type == MLX5_SET_MATCHER_HS_M && l3_item->mask != NULL) { + next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, mask); + } else if (key_type == MLX5_SET_MATCHER_HS_V && l3_item->spec != NULL) { + next_protocol = MLX5_L3_NEXT_PROTOCOL(l3_item, spec); + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + return next_protocol; + +#undef MLX5_L3_NEXT_PROTOCOL +} + /** * Internal validation function. For validating both actions and items. * @@ -7194,19 +7228,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + next_protocol = mlx5_flow_l3_next_protocol + (items, (enum MLX5_SET_MATCHER)-1); break; case RTE_FLOW_ITEM_TYPE_IPV6: mlx5_flow_tunnel_ip_check(items, next_protocol, @@ -7220,22 +7243,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = tunnel ? 
 					    MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 					    MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-			if (items->mask != NULL &&
-			    ((const struct rte_flow_item_ipv6 *)
-			     items->mask)->hdr.proto) {
-				item_ipv6_proto =
-					((const struct rte_flow_item_ipv6 *)
-					 items->spec)->hdr.proto;
-				next_protocol =
-					((const struct rte_flow_item_ipv6 *)
-					 items->spec)->hdr.proto;
-				next_protocol &=
-					((const struct rte_flow_item_ipv6 *)
-					 items->mask)->hdr.proto;
-			} else {
-				/* Reset for inner layer. */
-				next_protocol = 0xff;
-			}
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 			ret = flow_dv_validate_item_ipv6_frag_ext(items,
@@ -7246,19 +7255,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = tunnel ?
 					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
 					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
-			if (items->mask != NULL &&
-			    ((const struct rte_flow_item_ipv6_frag_ext *)
-			     items->mask)->hdr.next_header) {
-				next_protocol =
-					((const struct rte_flow_item_ipv6_frag_ext *)
-					 items->spec)->hdr.next_header;
-				next_protocol &=
-					((const struct rte_flow_item_ipv6_frag_ext *)
-					 items->mask)->hdr.next_header;
-			} else {
-				/* Reset for inner layer. */
-				next_protocol = 0xff;
-			}
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp
@@ -13249,28 +13247,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ?
 			    MLX5_FLOW_LAYER_INNER_L3_IPV4 :
 			    MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv4 *)
-		     items->mask)->hdr.next_proto_id) {
-			next_protocol =
-				((const struct rte_flow_item_ipv4 *)
-				 (items->spec))->hdr.next_proto_id;
-			next_protocol &=
-				((const struct rte_flow_item_ipv4 *)
-				 (items->mask))->hdr.next_proto_id;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv4 *)
-					 (items->mask))->hdr.next_proto_id;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv4 *)
-					 (items->spec))->hdr.next_proto_id;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6:
 		mlx5_flow_tunnel_ip_check(items, next_protocol,
@@ -13280,56 +13257,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ?
 			    MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 			    MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv6 *)
-		     items->mask)->hdr.proto) {
-			next_protocol =
-				((const struct rte_flow_item_ipv6 *)
-				 items->spec)->hdr.proto;
-			next_protocol &=
-				((const struct rte_flow_item_ipv6 *)
-				 items->mask)->hdr.proto;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6 *)
-					 (items->mask))->hdr.proto;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6 *)
-					 (items->spec))->hdr.proto;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 		flow_dv_translate_item_ipv6_frag_ext
 					(key, items, tunnel, key_type);
 		last_item = tunnel ?
 				MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
 				MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
-		if (items->mask != NULL &&
-		    items->spec != NULL &&
-		    ((const struct rte_flow_item_ipv6_frag_ext *)
-		     items->mask)->hdr.next_header) {
-			next_protocol =
-				((const struct rte_flow_item_ipv6_frag_ext *)
-				 items->spec)->hdr.next_header;
-			next_protocol &=
-				((const struct rte_flow_item_ipv6_frag_ext *)
-				 items->mask)->hdr.next_header;
-		} else if (key_type == MLX5_SET_MATCHER_HS_M &&
-			   items->mask != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
-					 (items->mask))->hdr.next_header;
-		} else if (key_type == MLX5_SET_MATCHER_HS_V &&
-			   items->spec != NULL) {
-			next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *)
-					 (items->spec))->hdr.next_header;
-		} else {
-			/* Reset for inner layer. */
-			next_protocol = 0xff;
-		}
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
 		break;
 	case RTE_FLOW_ITEM_TYPE_TCP:
 		flow_dv_translate_item_tcp(key, items, tunnel, key_type);
-- 
2.39.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-03-18 12:58:40.069485295 +0000
+++ 0019-net-mlx5-remove-duplication-of-L3-flow-item-validati.patch	2024-03-18 12:58:39.183346646 +0000
@@ -1 +1 @@
-From 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 Mon Sep 17 00:00:00 2001
+From 03243a27739ee97b02d23050b4a6245c239428a1 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 27e44a6f53eccc7d2ce80f6466fa214158f0ee81 ]
+
@@ -9 +10,0 @@
-Cc: stable@dpdk.org
@@ -18 +19 @@
-index f1584ed6e0..9e444c8a1c 100644
+index f5f33a9eca..a4fca70b07 100644
@@ -21 +22 @@
-@@ -7488,6 +7488,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
+@@ -6971,6 +6971,40 @@ flow_dv_validate_item_flex(struct rte_eth_dev *dev,
@@ -60 +61 @@
- * Validate IB BTH item.
+ * Internal validation function. For validating both actions and items.
@@ -62 +63 @@
-@@ -7770,19 +7804,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7194,19 +7228,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -84 +85 @@
-@@ -7796,22 +7819,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7220,22 +7243,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -109 +110 @@
-@@ -7822,19 +7831,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7246,19 +7255,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -131 +132 @@
-@@ -13997,28 +13995,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13249,28 +13247,7 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
@@ -161 +162 @@
-@@ -14028,56 +14005,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13280,56 +13257,14 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
Hi,

FYI, your patch has been queued to stable release 22.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/20/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/fd3721be47bb39a9b67467e97a848d4b493bd1bd

Thanks.

Luca Boccassi

---
From fd3721be47bb39a9b67467e97a848d4b493bd1bd Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Thu, 29 Feb 2024 18:05:04 +0200
Subject: [PATCH] net/mlx5: fix IP-in-IP tunnels recognition

[ upstream commit 2db234e769e121446b7b6d8e97e00212bebf7a3c ]

The patch fixes IP-in-IP tunnel recognition for the following patterns

/ [ipv4|ipv6] proto is [ipv4|ipv6] / end

/ [ipv4|ipv6] / [ipv4|ipv6] /

Fixes: 3d69434113d1 ("net/mlx5: add Direct Verbs validation function")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 104 ++++++++++++++++++++++++--------
 1 file changed, 80 insertions(+), 24 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a4fca70b07..68d3ee0c36 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -267,21 +267,41 @@ struct field_modify_info modify_tcp[] = {
 	{0, 0, 0},
 };
 
-static void
+enum mlx5_l3_tunnel_detection {
+	l3_tunnel_none,
+	l3_tunnel_outer,
+	l3_tunnel_inner
+};
+
+static enum mlx5_l3_tunnel_detection
 mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused,
-			  uint8_t next_protocol, uint64_t *item_flags,
-			  int *tunnel)
+			  uint8_t next_protocol, uint64_t item_flags,
+			  uint64_t *l3_tunnel_flag)
 {
+	enum mlx5_l3_tunnel_detection td = l3_tunnel_none;
+
 	MLX5_ASSERT(item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
 		    item->type == RTE_FLOW_ITEM_TYPE_IPV6);
-	if (next_protocol == IPPROTO_IPIP) {
-		*item_flags |= MLX5_FLOW_LAYER_IPIP;
-		*tunnel = 1;
-	}
-	if (next_protocol == IPPROTO_IPV6) {
-		*item_flags |= MLX5_FLOW_LAYER_IPV6_ENCAP;
-		*tunnel = 1;
+	if ((item_flags & MLX5_FLOW_LAYER_OUTER_L3) == 0) {
+		switch (next_protocol) {
+		case IPPROTO_IPIP:
+			td = l3_tunnel_outer;
+			*l3_tunnel_flag = MLX5_FLOW_LAYER_IPIP;
+			break;
+		case IPPROTO_IPV6:
+			td = l3_tunnel_outer;
+			*l3_tunnel_flag = MLX5_FLOW_LAYER_IPV6_ENCAP;
+			break;
+		default:
+			break;
+		}
+	} else {
+		td = l3_tunnel_inner;
+		*l3_tunnel_flag = item->type == RTE_FLOW_ITEM_TYPE_IPV4 ?
+				  MLX5_FLOW_LAYER_IPIP :
+				  MLX5_FLOW_LAYER_IPV6_ENCAP;
 	}
+	return td;
 }
 
 static inline struct mlx5_hlist *
@@ -7142,6 +7162,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		return ret;
 	is_root = (uint64_t)ret;
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+		enum mlx5_l3_tunnel_detection l3_tunnel_detection;
+		uint64_t l3_tunnel_flag;
 		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		int type = items->type;
 
@@ -7219,8 +7241,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			vlan_m = items->mask;
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			mlx5_flow_tunnel_ip_check(items, next_protocol,
-						  &item_flags, &tunnel);
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
+			l3_tunnel_detection =
+				mlx5_flow_tunnel_ip_check(items, next_protocol,
+							  item_flags,
+							  &l3_tunnel_flag);
+			if (l3_tunnel_detection == l3_tunnel_inner) {
+				item_flags |= l3_tunnel_flag;
+				tunnel = 1;
+			}
 			ret = flow_dv_validate_item_ipv4(dev, items, item_flags,
 							 last_item, ether_type,
 							 error);
@@ -7228,12 +7258,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = tunnel ?
 				    MLX5_FLOW_LAYER_INNER_L3_IPV4 :
 				    MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-			next_protocol = mlx5_flow_l3_next_protocol
-					(items, (enum MLX5_SET_MATCHER)-1);
+			if (l3_tunnel_detection == l3_tunnel_outer)
+				item_flags |= l3_tunnel_flag;
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
-			mlx5_flow_tunnel_ip_check(items, next_protocol,
-						  &item_flags, &tunnel);
+			next_protocol = mlx5_flow_l3_next_protocol
+					(items, (enum MLX5_SET_MATCHER)-1);
+			l3_tunnel_detection =
+				mlx5_flow_tunnel_ip_check(items, next_protocol,
+							  item_flags,
+							  &l3_tunnel_flag);
+			if (l3_tunnel_detection == l3_tunnel_inner) {
+				item_flags |= l3_tunnel_flag;
+				tunnel = 1;
+			}
 			ret = mlx5_flow_validate_item_ipv6(items, item_flags,
 							   last_item,
 							   ether_type,
@@ -7243,8 +7281,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = tunnel ?
 				    MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 				    MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-			next_protocol = mlx5_flow_l3_next_protocol
-					(items, (enum MLX5_SET_MATCHER)-1);
+			if (l3_tunnel_detection == l3_tunnel_outer)
+				item_flags |= l3_tunnel_flag;
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 			ret = flow_dv_validate_item_ipv6_frag_ext(items,
@@ -13197,6 +13235,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 	int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL);
 	int item_type = items->type;
 	uint64_t last_item = wks->last_item;
+	enum mlx5_l3_tunnel_detection l3_tunnel_detection;
+	uint64_t l3_tunnel_flag;
 	int ret;
 
 	switch (item_type) {
@@ -13240,24 +13280,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_VLAN);
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV4:
-		mlx5_flow_tunnel_ip_check(items, next_protocol,
-					  &wks->item_flags, &tunnel);
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+		l3_tunnel_detection =
+			mlx5_flow_tunnel_ip_check(items, next_protocol,
+						  wks->item_flags,
+						  &l3_tunnel_flag);
+		if (l3_tunnel_detection == l3_tunnel_inner) {
+			wks->item_flags |= l3_tunnel_flag;
+			tunnel = 1;
+		}
 		flow_dv_translate_item_ipv4(key, items, tunnel,
 					    wks->group, key_type);
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ?
 			    MLX5_FLOW_LAYER_INNER_L3_IPV4 :
 			    MLX5_FLOW_LAYER_OUTER_L3_IPV4;
-		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+		if (l3_tunnel_detection == l3_tunnel_outer)
+			wks->item_flags |= l3_tunnel_flag;
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6:
-		mlx5_flow_tunnel_ip_check(items, next_protocol,
-					  &wks->item_flags, &tunnel);
+		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+		l3_tunnel_detection =
+			mlx5_flow_tunnel_ip_check(items, next_protocol,
+						  wks->item_flags,
+						  &l3_tunnel_flag);
+		if (l3_tunnel_detection == l3_tunnel_inner) {
+			wks->item_flags |= l3_tunnel_flag;
+			tunnel = 1;
+		}
 		flow_dv_translate_item_ipv6(key, items, tunnel,
 					    wks->group, key_type);
 		wks->priority = MLX5_PRIORITY_MAP_L3;
 		last_item = tunnel ?
 			    MLX5_FLOW_LAYER_INNER_L3_IPV6 :
 			    MLX5_FLOW_LAYER_OUTER_L3_IPV6;
-		next_protocol = mlx5_flow_l3_next_protocol(items, key_type);
+		if (l3_tunnel_detection == l3_tunnel_outer)
+			wks->item_flags |= l3_tunnel_flag;
 		break;
 	case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
 		flow_dv_translate_item_ipv6_frag_ext
-- 
2.39.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-03-18 12:58:40.123113801 +0000
+++ 0020-net-mlx5-fix-IP-in-IP-tunnels-recognition.patch	2024-03-18 12:58:39.199347086 +0000
@@ -1 +1 @@
-From 2db234e769e121446b7b6d8e97e00212bebf7a3c Mon Sep 17 00:00:00 2001
+From fd3721be47bb39a9b67467e97a848d4b493bd1bd Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 2db234e769e121446b7b6d8e97e00212bebf7a3c ]
+
@@ -13 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +23 @@
-index 9e444c8a1c..80239bebee 100644
+index a4fca70b07..68d3ee0c36 100644
@@ -25 +26 @@
-@@ -275,21 +275,41 @@ struct field_modify_info modify_tcp[] = {
+@@ -267,21 +267,41 @@ struct field_modify_info modify_tcp[] = {
@@ -77 +78 @@
-@@ -7718,6 +7738,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7142,6 +7162,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -86 +87 @@
-@@ -7795,8 +7817,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7219,8 +7241,16 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -105 +106 @@
-@@ -7804,12 +7834,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7228,12 +7258,20 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -130 +131 @@
-@@ -7819,8 +7857,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7243,8 +7281,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
@@ -141 +142 @@
-@@ -13945,6 +13983,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13197,6 +13235,8 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
@@ -150 +151 @@
-@@ -13988,24 +14028,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
+@@ -13240,24 +13280,40 @@ flow_dv_translate_items(struct rte_eth_dev *dev,
Hi,

FYI, your patch has been queued to stable release 22.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/20/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/f5ff0aaf2e79c551419f48ad0b7dfa4bf20bd00c

Thanks.

Luca Boccassi

---
From f5ff0aaf2e79c551419f48ad0b7dfa4bf20bd00c Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Fri, 1 Mar 2024 08:04:48 +0200
Subject: [PATCH] net/mlx5: fix VLAN ID in flow modify

[ upstream commit b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d ]

PMD uses MODIFY_FIELD to implement standalone OF_SET_VLAN_VID flow action.

PMD assigned immediate VLAN Id value to the `.level` member of the
`rte_flow_action_modify_data` structure instead of `.value`.
That assignment has worked because both members had the same offset in
the hosting structure.

The patch assigns VLAN Id directly to `.value`

Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index bb4693c2b4..927be86c36 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4316,7 +4316,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		      rm[set_vlan_vid_ix].conf)->vlan_vid != 0);
 	const struct rte_flow_action_of_set_vlan_vid *conf =
 		ra[set_vlan_vid_ix].conf;
-	rte_be16_t vid = masked ? conf->vlan_vid : 0;
 	int width = mlx5_flow_item_field_width(dev, RTE_FLOW_FIELD_VLAN_ID, 0,
 					       NULL, &error);
 	*spec = (typeof(*spec)) {
@@ -4327,8 +4326,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
-			.level = vid,
-			.offset = 0,
 		},
 		.width = width,
 	};
@@ -4340,11 +4337,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
-			.level = masked ? (1U << width) - 1 : 0,
-			.offset = 0,
 		},
 		.width = 0xffffffff,
 	};
+	if (masked) {
+		uint32_t mask_val = 0xffffffff;
+
+		rte_memcpy(spec->src.value, &conf->vlan_vid,
+			   sizeof(conf->vlan_vid));
+		rte_memcpy(mask->src.value, &mask_val, sizeof(mask_val));
+	}
 	ra[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
 	ra[set_vlan_vid_ix].conf = spec;
 	rm[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
@@ -4371,8 +4372,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
-			.level = vid,
-			.offset = 0,
 		},
 		.width = width,
 	};
@@ -4381,6 +4380,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 		.conf = &conf
 	};
 
+	rte_memcpy(conf.src.value, &vid, sizeof(vid));
 	return flow_hw_modify_field_construct(job, act_data, hw_acts,
 					      &modify_action);
 }
-- 
2.39.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-03-18 12:58:39.964237119 +0000
+++ 0017-net-mlx5-fix-VLAN-ID-in-flow-modify.patch	2024-03-18 12:58:39.143345546 +0000
@@ -1 +1 @@
-From b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d Mon Sep 17 00:00:00 2001
+From f5ff0aaf2e79c551419f48ad0b7dfa4bf20bd00c Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit b89bfdd9be845b7ecfd50d2e9ec77f5cc2ccf94d ]
+
@@ -16 +17,0 @@
-Cc: stable@dpdk.org
@@ -25 +26 @@
-index a4e204695e..658f5daf82 100644
+index bb4693c2b4..927be86c36 100644
@@ -28 +29 @@
-@@ -6858,7 +6858,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -4316,7 +4316,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -36 +37 @@
-@@ -6869,8 +6868,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -4327,8 +4326,6 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -45 +46 @@
-@@ -6882,11 +6879,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
+@@ -4340,11 +4337,15 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
@@ -63 +64 @@
-@@ -6913,8 +6914,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
+@@ -4371,8 +4372,6 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
@@ -72 +73 @@
-@@ -6923,6 +6922,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
+@@ -4381,6 +4380,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
@@ -77 +78,2 @@
-	return flow_hw_modify_field_construct(mhdr_cmd, act_data, hw_acts, &modify_action);
+	return flow_hw_modify_field_construct(job, act_data, hw_acts,
+					      &modify_action);
@@ -79 +80,0 @@
-
Hi,

FYI, your patch has been queued to stable release 22.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/20/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/bad500334cc885e74719dd772ba177e3d2261c83

Thanks.

Luca Boccassi

---
From bad500334cc885e74719dd772ba177e3d2261c83 Mon Sep 17 00:00:00 2001
From: Shun Hao <shunh@nvidia.com>
Date: Fri, 1 Mar 2024 10:46:05 +0200
Subject: [PATCH] net/mlx5: fix meter policy priority

[ upstream commit 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 ]

Currently a meter policy's flows are always using the same priority for
all colors, so the red color flow might be before green/yellow ones.
This will impact the performance cause green/yellow packets will check
red flow first and got miss, then match green/yellow flows, introducing
more hops.

This patch fixes this by giving the same priority to flows for all
colors.

Fixes: 363db9b00f ("net/mlx5: handle yellow case in default meter policy")

Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 41 +++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7ed04bdb15..f5f33a9eca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -17146,9 +17146,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 		}
 	}
 	tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl);
-	if (priority < RTE_COLOR_RED)
-		flow_dv_match_meta_reg(matcher.mask.buf,
-			(enum modify_reg)color_reg_c_idx, color_mask, color_mask);
+	flow_dv_match_meta_reg(matcher.mask.buf,
+		(enum modify_reg)color_reg_c_idx, color_mask, color_mask);
 	matcher.priority = priority;
 	matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
 				    matcher.mask.size);
@@ -17199,7 +17198,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
 	int i;
 	int ret = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, &flow_err);
 	struct mlx5_sub_policy_color_rule *color_rule;
-	bool svport_match;
 	struct mlx5_sub_policy_color_rule *tmp_rules[RTE_COLORS] = {NULL};
 
 	if (ret < 0)
@@ -17235,10 +17233,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
 		/* No use. */
 		attr.priority = i;
 		/* Create matchers for colors. */
-		svport_match = (i != RTE_COLOR_RED) ? match_src_port : false;
 		if (__flow_dv_create_policy_matcher(dev, color_reg_c_idx,
 				MLX5_MTR_POLICY_MATCHER_PRIO, sub_policy,
-				&attr, svport_match, NULL,
+				&attr, match_src_port, NULL,
 				&color_rule->matcher, &flow_err)) {
 			DRV_LOG(ERR, "Failed to create color%u matcher.", i);
 			goto err_exit;
@@ -17248,7 +17245,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
 				color_reg_c_idx, (enum rte_color)i,
 				color_rule->matcher,
 				acts[i].actions_n, acts[i].dv_actions,
-				svport_match, NULL, &color_rule->rule,
+				match_src_port, NULL, &color_rule->rule,
 				&attr)) {
 			DRV_LOG(ERR, "Failed to create color%u rule.", i);
 			goto err_exit;
@@ -18131,7 +18128,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
 	struct {
 		struct mlx5_flow_meter_policy *fm_policy;
 		struct mlx5_flow_meter_info *next_fm;
-		struct mlx5_sub_policy_color_rule *tag_rule[MLX5_MTR_RTE_COLORS];
+		struct mlx5_sub_policy_color_rule *tag_rule[RTE_COLORS];
 	} fm_info[MLX5_MTR_CHAIN_MAX_NUM] = { {0} };
 	uint32_t fm_cnt = 0;
 	uint32_t i, j;
@@ -18165,14 +18162,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
 		mtr_policy = fm_info[i].fm_policy;
 		rte_spinlock_lock(&mtr_policy->sl);
 		sub_policy = mtr_policy->sub_policys[domain][0];
-		for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
+		for (j = 0; j < RTE_COLORS; j++) {
 			uint8_t act_n = 0;
-			struct mlx5_flow_dv_modify_hdr_resource *modify_hdr;
+			struct mlx5_flow_dv_modify_hdr_resource *modify_hdr = NULL;
 			struct mlx5_flow_dv_port_id_action_resource *port_action;
+			uint8_t fate_action;
 
-			if (mtr_policy->act_cnt[j].fate_action != MLX5_FLOW_FATE_MTR &&
-			    mtr_policy->act_cnt[j].fate_action != MLX5_FLOW_FATE_PORT_ID)
-				continue;
+			if (j == RTE_COLOR_RED) {
+				fate_action = MLX5_FLOW_FATE_DROP;
+			} else {
+				fate_action = mtr_policy->act_cnt[j].fate_action;
+				modify_hdr = mtr_policy->act_cnt[j].modify_hdr;
+				if (fate_action != MLX5_FLOW_FATE_MTR &&
+				    fate_action != MLX5_FLOW_FATE_PORT_ID &&
+				    fate_action != MLX5_FLOW_FATE_DROP)
+					continue;
+			}
 			color_rule = mlx5_malloc(MLX5_MEM_ZERO,
 				sizeof(struct mlx5_sub_policy_color_rule),
 				0, SOCKET_ID_ANY);
@@ -18184,9 +18189,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
 				goto err_exit;
 			}
 			color_rule->src_port = src_port;
-			modify_hdr = mtr_policy->act_cnt[j].modify_hdr;
 			/* Prepare to create color rule. */
-			if (mtr_policy->act_cnt[j].fate_action == MLX5_FLOW_FATE_MTR) {
+			if (fate_action == MLX5_FLOW_FATE_MTR) {
 				next_fm = fm_info[i].next_fm;
 				if (mlx5_flow_meter_attach(priv, next_fm, &attr, error)) {
 					mlx5_free(color_rule);
@@ -18213,7 +18217,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
 				}
 				acts.dv_actions[act_n++] = tbl_data->jump.action;
 				acts.actions_n = act_n;
-			} else {
+			} else if (fate_action == MLX5_FLOW_FATE_PORT_ID) {
 				port_action =
 					mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID],
 						mtr_policy->act_cnt[j].rix_port_id_action);
@@ -18226,6 +18230,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
 				acts.dv_actions[act_n++] = modify_hdr->action;
 				acts.dv_actions[act_n++] = port_action->action;
 				acts.actions_n = act_n;
+			} else {
+				acts.dv_actions[act_n++] = mtr_policy->dr_drop_action[domain];
+				acts.actions_n = act_n;
 			}
 			fm_info[i].tag_rule[j] = color_rule;
 			TAILQ_INSERT_TAIL(&sub_policy->color_rules[j], color_rule, next_port);
@@ -18257,7 +18264,7 @@ err_exit:
 		mtr_policy = fm_info[i].fm_policy;
 		rte_spinlock_lock(&mtr_policy->sl);
 		sub_policy = mtr_policy->sub_policys[domain][0];
-		for (j = 0; j < MLX5_MTR_RTE_COLORS; j++) {
+		for (j = 0; j < RTE_COLORS; j++) {
 			color_rule = fm_info[i].tag_rule[j];
 			if (!color_rule)
 				continue;
-- 
2.39.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-03-18 12:58:40.014648271 +0000
+++ 0018-net-mlx5-fix-meter-policy-priority.patch	2024-03-18 12:58:39.163346097 +0000
@@ -1 +1 @@
-From 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 Mon Sep 17 00:00:00 2001
+From bad500334cc885e74719dd772ba177e3d2261c83 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 1cfb78d2c40e3b3cf1bad061f21f306272fffd47 ]
+
@@ -16 +17,0 @@
-Cc: stable@dpdk.org
@@ -26 +27 @@
-index 18f09b22be..f1584ed6e0 100644
+index 7ed04bdb15..f5f33a9eca 100644
@@ -29 +30 @@
-@@ -17922,9 +17922,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
+@@ -17146,9 +17146,8 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
@@ -41 +42 @@
-@@ -17975,7 +17974,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17199,7 +17198,6 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -49 +50 @@
-@@ -18011,10 +18009,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17235,10 +17233,9 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -61 +62 @@
-@@ -18024,7 +18021,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
+@@ -17248,7 +17245,7 @@ __flow_dv_create_domain_policy_rules(struct rte_eth_dev *dev,
@@ -70 +71 @@
-@@ -18907,7 +18904,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18131,7 +18128,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -79 +80 @@
-@@ -18941,14 +18938,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18165,14 +18162,22 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -107 +108 @@
-@@ -18960,9 +18965,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18184,9 +18189,8 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -118 +119 @@
-@@ -18989,7 +18993,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18213,7 +18217,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -127 +128 @@
-@@ -19002,6 +19006,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
+@@ -18226,6 +18230,9 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
@@ -137 +138 @@
-@@ -19033,7 +19040,7 @@ err_exit:
+@@ -18257,7 +18264,7 @@ err_exit:
Hi,

FYI, your patch has been queued to stable release 22.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/20/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/78b059d7594359a43069c2ba643393bbae73c6e1

Thanks.

Luca Boccassi

---
From 78b059d7594359a43069c2ba643393bbae73c6e1 Mon Sep 17 00:00:00 2001
From: Ali Alnubani <alialnu@nvidia.com>
Date: Thu, 29 Feb 2024 18:45:26 +0200
Subject: [PATCH] doc: update link to Windows DevX in mlx5 guide

[ upstream commit 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 ]

The older link no longer works.

Signed-off-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Tal Shnaiderman <talshn@nvidia.com>
---
 doc/guides/platform/mlx5.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index 3cc1dd29e2..a8dcba9683 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -228,7 +228,7 @@ DevX SDK Installation
 The DevX SDK must be installed on the machine building the Windows PMD.
 Additional information can be found at
 `How to Integrate Windows DevX in Your Development Environment
-<https://docs.nvidia.com/networking/display/winof2v260/RShim+Drivers+and+Usage#RShimDriversandUsage-DevXInterface>`_.
+<https://docs.nvidia.com/networking/display/winof2v290/devx+interface>`_.
 The minimal supported WinOF2 version is 2.60.
-- 
2.39.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-03-18 12:58:39.925298996 +0000
+++ 0016-doc-update-link-to-Windows-DevX-in-mlx5-guide.patch	2024-03-18 12:58:39.131345217 +0000
@@ -1 +1 @@
-From 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 Mon Sep 17 00:00:00 2001
+From 78b059d7594359a43069c2ba643393bbae73c6e1 Mon Sep 17 00:00:00 2001
@@ -6 +6 @@
-The older link no longer works.
+[ upstream commit 5ddc8269192ca7aeec0bf903704c0385ebbd9e87 ]
@@ -8 +8 @@
-Cc: stable@dpdk.org
+The older link no longer works.
@@ -17 +17 @@
-index a66cf778d1..e9a1f52aca 100644
+index 3cc1dd29e2..a8dcba9683 100644
@@ -20 +20 @@
-@@ -230,7 +230,7 @@ DevX SDK Installation
+@@ -228,7 +228,7 @@ DevX SDK Installation