* [PATCH 0/5] net/hns3: fix and refactor mailbox code @ 2023-11-08 3:44 Jie Hai 2023-11-08 3:44 ` [PATCH 1/5] net/hns3: fix sync mailbox failure forever Jie Hai ` (8 more replies) 0 siblings, 9 replies; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev This patchset fixes a sync mailbox failure and refactors some of the mailbox code. Dengdui Huang (5): net/hns3: fix sync mailbox failure forever net/hns3: refactor VF mailbox message struct net/hns3: refactor PF mailbox message struct net/hns3: refactor send mailbox function net/hns3: refactor handle mailbox function drivers/net/hns3/hns3_cmd.c | 3 - drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 181 ++++++++++++---------- drivers/net/hns3/hns3_mbx.c | 244 +++++++++++++----------------- drivers/net/hns3/hns3_mbx.h | 104 +++++++++---- drivers/net/hns3/hns3_rxtx.c | 18 +-- 6 files changed, 290 insertions(+), 262 deletions(-) -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 1/5] net/hns3: fix sync mailbox failure forever 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai @ 2023-11-08 3:44 ` Jie Hai 2023-11-08 3:44 ` [PATCH 2/5] net/hns3: refactor VF mailbox message struct Jie Hai ` (7 subsequent siblings) 8 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev, Yisen Zhuang, Huisong Li, Min Hu (Connor), Chunsong Feng, Wei Hu (Xavier), Hao Chen From: Dengdui Huang <huangdengdui@huawei.com> Currently, the hns3 VF driver uses the following fields to match response and request messages for synchronous mailbox communication between VF and PF. 1. req_msg_data, which consists of the message code and subcode, is used to match request and response. 2. head is the number of messages the VF has sent successfully. 3. tail is the number of responses the VF has received successfully. 4. lost is the number of messages that timed out on the VF. 'head', 'tail' and 'lost' are dynamically updated during communication. Now there is an issue that all sync mailbox messages fail to be sent forever in the following case: 1. VF sends message A, then head=UINT32_MAX-1, tail=UINT32_MAX-3, lost=2. 2. VF sends message B, then head=UINT32_MAX, tail=UINT32_MAX-2, lost=2. 3. VF sends message C, which times out because no response arrives within 500ms, then head=0, tail=0, lost=2. Note: tail is assigned to head if tail > head according to the current code logic. From then on, all subsequent sync mailbox messages fail to be sent. Using the fields 'lost', 'tail' and 'head' is overly complicated. Instead, the code and subcode of the sync mailbox request can serve as the matching tag of the message, used to match the response message when receiving the synchronous response. 
This patch drops these fields and solves the issue as follows: when handling a response message, the req_msg_data of the request and the response is compared to judge whether the sync mailbox response has been received. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_cmd.c | 3 -- drivers/net/hns3/hns3_mbx.c | 81 ++++++------------------------------- drivers/net/hns3/hns3_mbx.h | 10 ----- 3 files changed, 13 insertions(+), 81 deletions(-) diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index a5c4c11dc8c4..2c1664485bef 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -731,9 +731,6 @@ hns3_cmd_init(struct hns3_hw *hw) hw->cmq.csq.next_to_use = 0; hw->cmq.crq.next_to_clean = 0; hw->cmq.crq.next_to_use = 0; - hw->mbx_resp.head = 0; - hw->mbx_resp.tail = 0; - hw->mbx_resp.lost = 0; hns3_cmd_init_regs(hw); rte_spinlock_unlock(&hw->cmq.crq.lock); diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 8e0a58aa02d8..f1743c195efa 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -40,23 +40,6 @@ hns3_resp_to_errno(uint16_t resp_code) return -EIO; } -static void -hns3_mbx_proc_timeout(struct hns3_hw *hw, uint16_t code, uint16_t subcode) -{ - if (hw->mbx_resp.matching_scheme == - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL) { - hw->mbx_resp.lost++; - hns3_err(hw, - "VF could not get mbx(%u,%u) head(%u) tail(%u) " - "lost(%u) from PF", - code, subcode, hw->mbx_resp.head, hw->mbx_resp.tail, - hw->mbx_resp.lost); - return; - } - - hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode); -} - static int hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, uint8_t *resp_data, uint16_t resp_len) @@ -67,7 +50,6 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct 
hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_mbx_resp_status *mbx_resp; uint32_t wait_time = 0; - bool received; if (resp_len > HNS3_MBX_MAX_RESP_DATA_SIZE) { hns3_err(hw, "VF mbx response len(=%u) exceeds maximum(=%d)", @@ -93,20 +75,14 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, hns3_dev_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); - if (hw->mbx_resp.matching_scheme == - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL) - received = (hw->mbx_resp.head == - hw->mbx_resp.tail + hw->mbx_resp.lost); - else - received = hw->mbx_resp.received_match_resp; - if (received) + if (hw->mbx_resp.received_match_resp) break; wait_time += HNS3_WAIT_RESP_US; } hw->mbx_resp.req_msg_data = 0; if (wait_time >= mbx_time_limit) { - hns3_mbx_proc_timeout(hw, code, subcode); + hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode); return -ETIME; } rte_io_rmb(); @@ -132,7 +108,6 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) * we get the exact scheme which is used. */ hw->mbx_resp.req_msg_data = (uint32_t)code << 16 | subcode; - hw->mbx_resp.head++; /* Update match_id and ensure the value of match_id is not zero */ hw->mbx_resp.match_id++; @@ -185,7 +160,6 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, req->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { - hw->mbx_resp.head--; rte_spinlock_unlock(&hw->mbx_resp.lock); hns3_err(hw, "VF failed(=%d) to send mbx message to PF", ret); @@ -254,41 +228,10 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, hns3_schedule_reset(HNS3_DEV_HW_TO_ADAPTER(hw)); } -/* - * Case1: receive response after timeout, req_msg_data - * is 0, not equal resp_msg, do lost-- - * Case2: receive last response during new send_mbx_msg, - * req_msg_data is different with resp_msg, let - * lost--, continue to wait for response. 
- */ -static void -hns3_update_resp_position(struct hns3_hw *hw, uint32_t resp_msg) -{ - struct hns3_mbx_resp_status *resp = &hw->mbx_resp; - uint32_t tail = resp->tail + 1; - - if (tail > resp->head) - tail = resp->head; - if (resp->req_msg_data != resp_msg) { - if (resp->lost) - resp->lost--; - hns3_warn(hw, "Received a mismatched response req_msg(%x) " - "resp_msg(%x) head(%u) tail(%u) lost(%u)", - resp->req_msg_data, resp_msg, resp->head, tail, - resp->lost); - } else if (tail + resp->lost > resp->head) { - resp->lost--; - hns3_warn(hw, "Received a new response again resp_msg(%x) " - "head(%u) tail(%u) lost(%u)", resp_msg, - resp->head, tail, resp->lost); - } - rte_io_wmb(); - resp->tail = tail; -} - static void hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { +#define HNS3_MBX_RESP_CODE_OFFSET 16 struct hns3_mbx_resp_status *resp = &hw->mbx_resp; uint32_t msg_data; @@ -298,12 +241,6 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * match_id to its response. So VF could use the match_id * to match the request. */ - if (resp->matching_scheme != - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID) { - resp->matching_scheme = - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID; - hns3_info(hw, "detect mailbox support match id!"); - } if (req->match_id == resp->match_id) { resp->resp_status = hns3_resp_to_errno(req->msg[3]); memcpy(resp->additional_info, &req->msg[4], @@ -319,11 +256,19 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. 
*/ + msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + if (resp->req_msg_data != msg_data) { + hns3_warn(hw, + "received response tag (%u) is mismatched with requested tag (%u)", + msg_data, resp->req_msg_data); + return; + } + resp->resp_status = hns3_resp_to_errno(req->msg[3]); memcpy(resp->additional_info, &req->msg[4], HNS3_MBX_MAX_RESP_DATA_SIZE); - msg_data = (uint32_t)req->msg[1] << 16 | req->msg[2]; - hns3_update_resp_position(hw, msg_data); + rte_io_wmb(); + resp->received_match_resp = true; } static void diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index c378783c6ca4..4a328802b920 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -93,21 +93,11 @@ enum hns3_mbx_link_fail_subcode { #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 -enum { - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL = 0, - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID -}; - struct hns3_mbx_resp_status { rte_spinlock_t lock; /* protects against contending sync cmd resp */ - uint8_t matching_scheme; - /* The following fields used in the matching scheme for original */ uint32_t req_msg_data; - uint32_t head; - uint32_t tail; - uint32_t lost; /* The following fields used in the matching scheme for match_id */ uint16_t match_id; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 2/5] net/hns3: refactor VF mailbox message struct 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai 2023-11-08 3:44 ` [PATCH 1/5] net/hns3: fix sync mailbox failure forever Jie Hai @ 2023-11-08 3:44 ` Jie Hai 2023-11-09 18:51 ` Ferruh Yigit 2023-11-08 3:44 ` [PATCH 3/5] net/hns3: refactor PF " Jie Hai ` (6 subsequent siblings) 8 siblings, 1 reply; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev, Yisen Zhuang, Ferruh Yigit, Min Hu (Connor), Huisong Li, Wei Hu (Xavier), Hao Chen From: Dengdui Huang <huangdengdui@huawei.com> The data region in VF to PF mbx memssage command is used to communicate with PF driver. And this data region exists as an array. As a result, some complicated feature commands, like setting promisc mode, map/unmap ring vector and setting VLAN id, have to use magic number to set them. This isn't good for maintenance of driver. So this patch refactors these messages by extracting an hns3_vf_to_pf_msg structure. In addition, the PF link change event message is reported by the firmware and is reported in hns3_mbx_vf_to_pf_cmd format, it also needs to be modified. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 54 +++++++++++++--------------- drivers/net/hns3/hns3_mbx.c | 24 ++++++------- drivers/net/hns3/hns3_mbx.h | 58 +++++++++++++++++++++++-------- 3 files changed, 78 insertions(+), 58 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 156fb905f990..d13b0586d334 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * the packets with vlan tag in promiscuous mode. 
*/ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; - req->msg[1] = en_bc_pmc ? 1 : 0; - req->msg[2] = en_uc_pmc ? 1 : 0; - req->msg[3] = en_mc_pmc ? 1 : 0; - req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; + req->msg.code = HNS3_MBX_SET_PROMISC_MODE; + req->msg.en_bc = en_bc_pmc ? 1 : 0; + req->msg.en_uc = en_uc_pmc ? 1 : 0; + req->msg.en_mc = en_mc_pmc ? 1 : 0; + req->msg.en_limit_promisc = + hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; ret = hns3_cmd_send(hw, &desc, 1); if (ret) @@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { - struct hns3_vf_bind_vector_msg bind_msg; +#define HNS3_RING_VERCTOR_DATA_SIZE 14 + struct hns3_vf_to_pf_msg req = {0}; const char *op_str; - uint16_t code; int ret; - memset(&bind_msg, 0, sizeof(bind_msg)); - code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : + req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : HNS3_MBX_UNMAP_RING_TO_VECTOR; - bind_msg.vector_id = (uint8_t)vector_id; + req.vector_id = (uint8_t)vector_id; + req.ring_num = 1; if (queue_type == HNS3_RING_TYPE_RX) - bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX; + req.ring_param[0].int_gl_index = HNS3_RING_GL_RX; else - bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX; - - bind_msg.param[0].ring_type = queue_type; - bind_msg.ring_num = 1; - bind_msg.param[0].tqp_index = queue_id; + req.ring_param[0].int_gl_index = HNS3_RING_GL_TX; + req.ring_param[0].ring_type = queue_type; + req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg, - sizeof(bind_msg), false, NULL, 0); + ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, + HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0); if (ret) - hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.", - op_str, queue_id, bind_msg.vector_id, ret); + hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", + op_str, queue_id, req.vector_id, ret); return ret; } @@ -950,19 +949,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { -#define HNS3VF_VLAN_MBX_MSG_LEN 5 + struct hns3_mbx_vlan_filter vlan_filter = {0}; struct hns3_hw *hw = &hns->hw; - uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN]; - uint16_t proto = htons(RTE_ETHER_TYPE_VLAN); - uint8_t is_kill = on ? 0 : 1; - msg_data[0] = is_kill; - memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); - memcpy(&msg_data[3], &proto, sizeof(proto)); + vlan_filter.is_kill = on ? 
0 : 1; + vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL, - 0); + (uint8_t *)&vlan_filter, sizeof(vlan_filter), + true, NULL, 0); } static int diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index f1743c195efa..ad5ec555b39e 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -11,8 +11,6 @@ #include "hns3_intr.h" #include "hns3_rxtx.h" -#define HNS3_CMD_CODE_OFFSET 2 - static const struct errno_respcode_map err_code_map[] = { {0, 0}, {1, -EPERM}, @@ -127,29 +125,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct hns3_mbx_vf_to_pf_cmd *req; struct hns3_cmd_desc desc; bool is_ring_vector_msg; - int offset; int ret; req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; /* first two bytes are reserved for code & subcode */ - if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) { + if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { hns3_err(hw, "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET); + msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); return -EINVAL; } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = code; + req->msg.code = code; is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || (code == HNS3_MBX_GET_RING_VECTOR_MAP); if (!is_ring_vector_msg) - req->msg[1] = subcode; + req->msg.subcode = subcode; if (msg_data) { - offset = is_ring_vector_msg ? 
1 : HNS3_CMD_CODE_OFFSET; - memcpy(&req->msg[offset], msg_data, msg_len); + if (is_ring_vector_msg) + memcpy(&req->msg.vector_id, msg_data, msg_len); + else + memcpy(&req->msg.data, msg_data, msg_len); } /* synchronous send */ @@ -296,11 +295,8 @@ static void hns3pf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_vf_to_pf_cmd *req) { -#define LINK_STATUS_OFFSET 1 -#define LINK_FAIL_CODE_OFFSET 2 - - if (!req->msg[LINK_STATUS_OFFSET]) - hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]); + if (!req->msg.link_status) + hns3_link_fail_parse(hw, req->msg.link_fail_code); hns3_update_linkstatus_and_event(hw, true); } diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 4a328802b920..3f623ba64ca4 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode { HNS3_MBX_LF_XSFP_ABSENT, }; -#define HNS3_MBX_MAX_MSG_SIZE 16 #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 @@ -107,6 +106,48 @@ struct hns3_mbx_resp_status { uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; }; +struct hns3_ring_chain_param { + uint8_t ring_type; + uint8_t tqp_index; + uint8_t int_gl_index; +}; + +#pragma pack(1) +struct hns3_mbx_vlan_filter { + uint8_t is_kill; + uint16_t vlan_id; + uint16_t proto; +}; +#pragma pack() + +#define HNS3_MBX_MSG_MAX_DATA_SIZE 14 +#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 +struct hns3_vf_to_pf_msg { + uint8_t code; + union { + struct { + uint8_t subcode; + uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; + struct { + uint8_t en_bc; + uint8_t en_uc; + uint8_t en_mc; + uint8_t en_limit_promisc; + }; + struct { + uint8_t vector_id; + uint8_t ring_num; + struct hns3_ring_chain_param + ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; + }; + struct { + uint8_t link_status; + uint8_t link_fail_code; + }; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -122,7 +163,7 @@ struct hns3_mbx_vf_to_pf_cmd { uint8_t 
msg_len; uint8_t rsv2; uint16_t match_id; - uint8_t msg[HNS3_MBX_MAX_MSG_SIZE]; + struct hns3_vf_to_pf_msg msg; }; struct hns3_mbx_pf_to_vf_cmd { @@ -134,19 +175,6 @@ struct hns3_mbx_pf_to_vf_cmd { uint16_t msg[8]; }; -struct hns3_ring_chain_param { - uint8_t ring_type; - uint8_t tqp_index; - uint8_t int_gl_index; -}; - -#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 -struct hns3_vf_bind_vector_msg { - uint8_t vector_id; - uint8_t ring_num; - struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; -}; - struct hns3_pf_rst_done_cmd { uint8_t pf_rst_done; uint8_t rsv[23]; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 2/5] net/hns3: refactor VF mailbox message struct 2023-11-08 3:44 ` [PATCH 2/5] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-11-09 18:51 ` Ferruh Yigit 0 siblings, 0 replies; 33+ messages in thread From: Ferruh Yigit @ 2023-11-09 18:51 UTC (permalink / raw) To: Jie Hai, dev, Yisen Zhuang, Ferruh Yigit, Min Hu (Connor), Huisong Li, Wei Hu (Xavier), Hao Chen On 11/8/2023 3:44 AM, Jie Hai wrote: > From: Dengdui Huang <huangdengdui@huawei.com> > > The data region in VF to PF mbx memssage command is > s/memssage/message/ Same for next patch > used to communicate with PF driver. And this data > region exists as an array. As a result, some complicated > feature commands, like setting promisc mode, map/unmap > ring vector and setting VLAN id, have to use magic number > to set them. This isn't good for maintenance of driver. > So this patch refactors these messages by extracting an > hns3_vf_to_pf_msg structure. > > In addition, the PF link change event message is reported > by the firmware and is reported in hns3_mbx_vf_to_pf_cmd > format, it also needs to be modified. > > Fixes: 463e748964f5 ("net/hns3: support mailbox") > Cc: stable@dpdk.org > > Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> > Signed-off-by: Jie Hai <haijie1@huawei.com> <...> ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 3/5] net/hns3: refactor PF mailbox message struct 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai 2023-11-08 3:44 ` [PATCH 1/5] net/hns3: fix sync mailbox failure forever Jie Hai 2023-11-08 3:44 ` [PATCH 2/5] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-11-08 3:44 ` Jie Hai 2023-11-08 3:44 ` [PATCH 4/5] net/hns3: refactor send mailbox function Jie Hai ` (5 subsequent siblings) 8 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev, Yisen Zhuang, Min Hu (Connor), Wei Hu (Xavier), Chunsong Feng, Huisong Li, Hao Chen From: Dengdui Huang <huangdengdui@huawei.com> The data region in PF to VF mbx memssage command is used to communicate with VF driver. And this data region exists as an array. As a result, some complicated feature commands, like mailbox response, link change event, close promisc mode, reset request and update pvid state, have to use magic number to set them. This isn't good for maintenance of driver. So this patch refactors these messages by extracting an hns3_pf_to_vf_msg structure. 
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++------------------- drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index ad5ec555b39e..c90f5d59ba21 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -192,17 +192,17 @@ static void hns3vf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { + struct hns3_mbx_link_status *link_info = + (struct hns3_mbx_link_status *)req->msg.msg_data; uint8_t link_status, link_duplex; - uint16_t *msg_q = req->msg; uint8_t support_push_lsc; uint32_t link_speed; - memcpy(&link_speed, &msg_q[2], sizeof(link_speed)); - link_status = rte_le_to_cpu_16(msg_q[1]); - link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]); - hns3vf_update_link_status(hw, link_status, link_speed, - link_duplex); - support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u; + link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status); + link_speed = rte_le_to_cpu_32(link_info->speed); + link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex); + hns3vf_update_link_status(hw, link_status, link_speed, link_duplex); + support_push_lsc = (link_info->flag) & 1u; hns3vf_update_push_lsc_cap(hw, support_push_lsc); } @@ -211,7 +211,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { enum hns3_reset_level reset_level; - uint16_t *msg_q = req->msg; /* * PF has asserted reset hence VF should go in pending @@ -219,7 +218,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, * has been completely reset. After this stack should * eventually be re-initialized. 
*/ - reset_level = rte_le_to_cpu_16(msg_q[1]); + reset_level = rte_le_to_cpu_16(req->msg.reset_level); hns3_atomic_set_bit(reset_level, &hw->reset.pending); hns3_warn(hw, "PF inform reset level %d", reset_level); @@ -241,8 +240,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * to match the request. */ if (req->match_id == resp->match_id) { - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = + hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -255,7 +255,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. */ - msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + msg_data = (uint32_t)req->msg.vf_mbx_msg_code << + HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode; if (resp->req_msg_data != msg_data) { hns3_warn(hw, "received response tag (%u) is mismatched with requested tag (%u)", @@ -263,8 +264,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) return; } - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -305,8 +306,7 @@ static void hns3_update_port_base_vlan_info(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { -#define PVID_STATE_OFFSET 1 - uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ? + uint16_t new_pvid_state = req->msg.pvid_state ? 
HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE; /* * Currently, hardware doesn't support more than two layers VLAN offload @@ -355,7 +355,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) while (next_to_use != tail) { desc = &crq->desc[next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag); if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B)) @@ -428,7 +428,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) desc = &crq->desc[crq->next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { @@ -484,7 +484,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) * hns3 PF kernel driver, VF driver will receive this * mailbox message from PF driver. */ - hns3_handle_promisc_info(hw, req->msg[1]); + hns3_handle_promisc_info(hw, req->msg.promisc_en); break; default: hns3_err(hw, "received unsupported(%u) mbx msg", diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 3f623ba64ca4..64f30d2923ea 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter { uint16_t vlan_id; uint16_t proto; }; + +struct hns3_mbx_link_status { + uint16_t link_status; + uint32_t speed; + uint16_t duplex; + uint8_t flag; +}; #pragma pack() #define HNS3_MBX_MSG_MAX_DATA_SIZE 14 @@ -148,6 +155,22 @@ struct hns3_vf_to_pf_msg { }; }; +struct hns3_pf_to_vf_msg { + uint16_t code; + union { + struct { + uint16_t vf_mbx_msg_code; + uint16_t vf_mbx_msg_subcode; + uint16_t resp_status; + uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE]; + }; + uint16_t promisc_en; + uint16_t reset_level; + uint16_t pvid_state; + uint8_t msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; +}; + struct errno_respcode_map { uint16_t resp_code; 
int err_no; @@ -172,7 +195,7 @@ struct hns3_mbx_pf_to_vf_cmd { uint8_t msg_len; uint8_t rsv1; uint16_t match_id; - uint16_t msg[8]; + struct hns3_pf_to_vf_msg msg; }; struct hns3_pf_rst_done_cmd { -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 4/5] net/hns3: refactor send mailbox function 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (2 preceding siblings ...) 2023-11-08 3:44 ` [PATCH 3/5] net/hns3: refactor PF " Jie Hai @ 2023-11-08 3:44 ` Jie Hai 2023-11-08 3:44 ` [PATCH 5/5] net/hns3: refactor handle " Jie Hai ` (4 subsequent siblings) 8 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev, Yisen Zhuang, Chunsong Feng, Ferruh Yigit, Hao Chen, Huisong Li, Wei Hu (Xavier) From: Dengdui Huang <huangdengdui@huawei.com> The 'hns3_send_mbx_msg' function has the following problems: 1. the name is vague and does not indicate the caller; 2. it takes too many input parameters, because filling the message is mixed into the send function. Therefore, a common interface is encapsulated to fill in the mailbox message before sending it. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 141 ++++++++++++++++++------------ drivers/net/hns3/hns3_mbx.c | 50 ++++------- drivers/net/hns3/hns3_mbx.h | 8 +- drivers/net/hns3/hns3_rxtx.c | 18 ++-- 4 files changed, 116 insertions(+), 101 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index d13b0586d334..2da73857ac56 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes, - RTE_ETHER_ADDR_LEN, false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_ADD); + memcpy(req.data, mac_addr->addr_bytes, 
RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *old_addr; uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; /* @@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes, RTE_ETHER_ADDR_LEN); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes, - HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_MODIFY); + memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) { /* * The hns3 VF PMD depends on the hns3 PF kernel ethdev @@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_ADD, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + 
HNS3_MBX_MAC_VLAN_MC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { -#define HNS3_RING_VERCTOR_DATA_SIZE 14 struct hns3_vf_to_pf_msg req = {0}; const char *op_str; int ret; @@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, req.ring_param[0].ring_type = queue_type; req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, - HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", op_str, queue_id, req.vector_id, ret); @@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) static int hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu) { + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu, - sizeof(mtu), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0); + memcpy(req.data, &mtu, sizeof(mtu)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret); @@ -631,12 +638,13 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) uint16_t val = HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED; uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN; struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; __atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, __ATOMIC_RELEASE); - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); while (remain_ms > 0) { rte_delay_ms(HNS3_POLL_RESPONE_MS); @@ -731,12 +739,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw) static int hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw) { + struct hns3_vf_to_pf_msg req; uint8_t resp_msg; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0, - true, &resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_GET_PORT_BASE_VLAN_STATE); + ret = hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg)); if (ret) { if (ret == -ETIME) { /* @@ -777,10 +786,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw) { #define HNS3VF_TQPS_RSS_INFO_LEN 6 uint8_t 
resp_msg[HNS3VF_TQPS_RSS_INFO_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true, - resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, + resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); if (ret) { PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret); return ret; @@ -818,10 +829,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw) { uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE]; struct hns3_basic_info *basic_info; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0, - true, resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg)); if (ret) { hns3_err(hw, "failed to get basic info from PF, ret = %d.", ret); @@ -841,10 +853,11 @@ static int hns3vf_get_host_mac_addr(struct hns3_hw *hw) { uint8_t host_mac[RTE_ETHER_ADDR_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0, - true, host_mac, RTE_ETHER_ADDR_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0); + ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN); if (ret) { hns3_err(hw, "Failed to get mac addr from PF: %d", ret); return ret; @@ -893,6 +906,7 @@ static void hns3vf_request_link_info(struct hns3_hw *hw) { struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; bool send_req; int ret; @@ -904,8 +918,8 @@ hns3vf_request_link_info(struct hns3_hw *hw) if (!send_req) return; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_err(hw, "failed to fetch link status, ret = %d", ret); return; @@ -949,16 +963,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int 
hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { - struct hns3_mbx_vlan_filter vlan_filter = {0}; + struct hns3_mbx_vlan_filter *vlan_filter; + struct hns3_vf_to_pf_msg req = {0}; struct hns3_hw *hw = &hns->hw; - vlan_filter.is_kill = on ? 0 : 1; - vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); - vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); + req.code = HNS3_MBX_SET_VLAN; + req.subcode = HNS3_MBX_VLAN_FILTER; + vlan_filter = (struct hns3_mbx_vlan_filter *)req.data; + vlan_filter->is_kill = on ? 0 : 1; + vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id); - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - (uint8_t *)&vlan_filter, sizeof(vlan_filter), - true, NULL, 0); + return hns3vf_mbx_send(hw, &req, true, NULL, 0); } static int @@ -987,6 +1003,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) static int hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; @@ -994,9 +1011,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) return 0; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_ENABLE_VLAN_FILTER); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "%s vlan filter failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1007,12 +1025,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) static int hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; msg_data = enable ? 
1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG, - &msg_data, sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_VLAN_RX_OFF_CFG); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "vf %s strip failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1156,11 +1177,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev) static int hns3vf_set_alive(struct hns3_hw *hw, bool alive) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; msg_data = alive ? 1 : 0; - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data, - sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0); + memcpy(req.data, &msg_data, sizeof(msg_data)); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static void @@ -1168,11 +1191,12 @@ hns3vf_keep_alive_handler(void *param) { struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; struct hns3_adapter *hns = eth_dev->data->dev_private; + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "VF sends keeping alive cmd failed(=%d)", ret); @@ -1311,9 +1335,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns) static int hns3vf_clear_vport_list(struct hns3_hw *hw) { - return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL, - HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false, - NULL, 0); + struct hns3_vf_to_pf_msg req; + + hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL, + HNS3_MBX_VPORT_LIST_CLEAR); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static int @@ -1782,12 +1808,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns) static int hns3vf_prepare_reset(struct hns3_adapter *hns) { + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int 
ret; if (hw->reset.level == HNS3_VF_FUNC_RESET) { - ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL, - 0, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) return ret; } diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index c90f5d59ba21..43195ff184b1 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = { {95, -EOPNOTSUPP}, }; +void +hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode) +{ + memset(req, 0, sizeof(struct hns3_vf_to_pf_msg)); + req->code = code; + req->subcode = subcode; +} + static int hns3_resp_to_errno(uint16_t resp_code) { @@ -118,45 +126,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) } int -hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len) +hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req, bool need_resp, + uint8_t *resp_data, uint16_t resp_len) { - struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_mbx_vf_to_pf_cmd *cmd; struct hns3_cmd_desc desc; - bool is_ring_vector_msg; int ret; - req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; - - /* first two bytes are reserved for code & subcode */ - if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { - hns3_err(hw, - "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); - return -EINVAL; - } - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg.code = code; - is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || - (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || - (code == HNS3_MBX_GET_RING_VECTOR_MAP); - if (!is_ring_vector_msg) - req->msg.subcode = subcode; - if (msg_data) { - if (is_ring_vector_msg) - memcpy(&req->msg.vector_id, msg_data, msg_len); - 
else - memcpy(&req->msg.data, msg_data, msg_len); - } + cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; + cmd->msg = *req; /* synchronous send */ if (need_resp) { - req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; + cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; rte_spinlock_lock(&hw->mbx_resp.lock); - hns3_mbx_prepare_resp(hw, code, subcode); - req->match_id = hw->mbx_resp.match_id; + hns3_mbx_prepare_resp(hw, req->code, req->subcode); + cmd->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { rte_spinlock_unlock(&hw->mbx_resp.lock); @@ -165,7 +152,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return ret; } - ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len); + ret = hns3_get_mbx_resp(hw, req->code, req->subcode, + resp_data, resp_len); rte_spinlock_unlock(&hw->mbx_resp.lock); } else { /* asynchronous send */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 64f30d2923ea..360e91c30eb9 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -210,7 +210,9 @@ struct hns3_pf_rst_done_cmd { struct hns3_hw; void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); -int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len); +void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, + uint8_t code, uint8_t subcode); +int hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req_msg, bool need_resp, + uint8_t *resp_data, uint16_t resp_len); #endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 09b7e90c7000..9087bcffed9b 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) static int hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) { - uint8_t msg_data[2]; + struct hns3_vf_to_pf_msg req; 
int ret; - memcpy(msg_data, &queue_id, sizeof(uint16_t)); - - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + memcpy(req.data, &queue_id, sizeof(uint16_t)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.", queue_id, ret); @@ -769,15 +768,14 @@ static int hns3vf_reset_all_tqps(struct hns3_hw *hw) { #define HNS3VF_RESET_ALL_TQP_DONE 1U + struct hns3_vf_to_pf_msg req; uint8_t reset_status; - uint8_t msg_data[2]; int ret; uint16_t i; - memset(msg_data, 0, sizeof(msg_data)); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, &reset_status, - sizeof(reset_status)); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, + &reset_status, sizeof(reset_status)); if (ret) { hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret); return ret; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
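The refactored call sites above all follow the same setup-then-send pattern. A minimal standalone sketch of that pattern (with a simplified, hypothetical request struct — the real `struct hns3_vf_to_pf_msg` in hns3_mbx.h carries more fields, and the code/subcode values below are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-in for struct hns3_vf_to_pf_msg. */
struct vf_to_pf_msg {
	uint8_t code;
	uint8_t subcode;
	uint8_t data[14];	/* payload, e.g. a MAC address or an MTU */
};

/* Mirrors hns3vf_mbx_setup(): zero the whole request, then fill in the
 * message code and subcode, so call sites cannot leak stale payload bytes. */
static void mbx_setup(struct vf_to_pf_msg *req, uint8_t code, uint8_t subcode)
{
	memset(req, 0, sizeof(*req));
	req->code = code;
	req->subcode = subcode;
}
```

A call site then only copies its payload into `req.data` and passes the one struct to the send function, instead of the old variadic-style `hns3_send_mbx_msg(hw, code, subcode, msg_data, msg_len, ...)` argument list.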
* [PATCH 5/5] net/hns3: refactor handle mailbox function 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (3 preceding siblings ...) 2023-11-08 3:44 ` [PATCH 4/5] net/hns3: refactor send mailbox function Jie Hai @ 2023-11-08 3:44 ` Jie Hai 2023-11-09 18:50 ` [PATCH 0/5] net/hns3: fix and refactor mailbox code Ferruh Yigit ` (3 subsequent siblings) 8 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-08 3:44 UTC (permalink / raw) To: dev, Yisen Zhuang, Huisong Li, Chunsong Feng, Hao Chen, Wei Hu (Xavier), Min Hu (Connor), Hongbo Zheng From: Dengdui Huang <huangdengdui@huawei.com> The mailbox messages of the PF and the VF are currently processed in the same function, which couples the two paths excessively and is not good for maintenance. Therefore, this patch separates the interfaces that handle PF mailbox messages and VF mailbox messages. Fixes: 463e748964f5 ("net/hns3: support mailbox") Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_mbx.c | 69 ++++++++++++++++++++++++------- drivers/net/hns3/hns3_mbx.h | 3 +- 4 files changed, 58 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 941d047bf1bd..18543e88edc7 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -380,7 +380,7 @@ hns3_interrupt_handler(void *param) hns3_warn(hw, "received reset interrupt"); hns3_schedule_reset(hns); } else if (event_cause == HNS3_VECTOR0_EVENT_MBX) { - hns3_dev_handle_mbx_msg(hw); + hns3pf_handle_mbx_msg(hw); } else if (event_cause != HNS3_VECTOR0_EVENT_PTP) { hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x " "ras_int_stat:0x%x cmdq_int_stat:0x%x", 
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 2da73857ac56..3ae4f965bb0b 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -605,7 +605,7 @@ hns3vf_interrupt_handler(void *param) hns3_schedule_reset(hns); break; case HNS3VF_VECTOR0_EVENT_MBX: - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); break; default: break; @@ -655,7 +655,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE * mailbox from PF driver to get this capability. */ - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) != HNS3_PF_PUSH_LSC_CAP_UNKNOWN) break; diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 43195ff184b1..9cdbc1668a17 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -78,7 +78,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return -EIO; } - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); if (hw->mbx_resp.received_match_resp) @@ -372,9 +372,57 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) } void -hns3_dev_handle_mbx_msg(struct hns3_hw *hw) +hns3pf_handle_mbx_msg(struct hns3_hw *hw) +{ + struct hns3_cmq_ring *crq = &hw->cmq.crq; + struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_cmd_desc *desc; + uint16_t flag; + + rte_spinlock_lock(&hw->cmq.crq.lock); + + while (!hns3_cmd_crq_empty(hw)) { + if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) { + rte_spinlock_unlock(&hw->cmq.crq.lock); + return; + } + desc = &crq->desc[crq->next_to_use]; + req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data; + + flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); + if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { + hns3_warn(hw, + "dropped invalid mailbox message, code = %u", + req->msg.code); + + /* dropping/not processing this 
invalid message */ + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + continue; + } + + switch (req->msg.code) { + case HNS3_MBX_PUSH_LINK_STATUS: + hns3pf_handle_link_change_event(hw, req); + break; + default: + hns3_err(hw, "received unsupported(%u) mbx msg", + req->msg.code); + break; + } + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + } + + /* Write back CMDQ_RQ header pointer, IMP need this pointer */ + hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use); + + rte_spinlock_unlock(&hw->cmq.crq.lock); +} + +void +hns3vf_handle_mbx_msg(struct hns3_hw *hw) { - struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_cmq_ring *crq = &hw->cmq.crq; struct hns3_mbx_pf_to_vf_cmd *req; struct hns3_cmd_desc *desc; @@ -385,7 +433,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) rte_spinlock_lock(&hw->cmq.crq.lock); handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY || - !rte_thread_is_intr()) && hns->is_vf; + !rte_thread_is_intr()); if (handle_out) { /* * Currently, any threads in the primary and secondary processes @@ -430,8 +478,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) continue; } - handle_out = hns->is_vf && desc->opcode == 0; - if (handle_out) { + if (desc->opcode == 0) { /* Message already processed by other thread */ crq->desc[crq->next_to_use].flag = 0; hns3_mbx_ring_ptr_move_crq(crq); @@ -448,16 +495,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) case HNS3_MBX_ASSERTING_RESET: hns3_handle_asserting_reset(hw, req); break; - case HNS3_MBX_PUSH_LINK_STATUS: - /* - * This message is reported by the firmware and is - * reported in 'struct hns3_mbx_vf_to_pf_cmd' format. - * Therefore, we should cast the req variable to - * 'struct hns3_mbx_vf_to_pf_cmd' and then process it. 
- */ - hns3pf_handle_link_change_event(hw, - (struct hns3_mbx_vf_to_pf_cmd *)req); - break; case HNS3_MBX_PUSH_VLAN_INFO: /* * When the PVID configuration status of VF device is diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 360e91c30eb9..967d9df3bcac 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -209,7 +209,8 @@ struct hns3_pf_rst_done_cmd { ((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num) struct hns3_hw; -void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); +void hns3pf_handle_mbx_msg(struct hns3_hw *hw); +void hns3vf_handle_mbx_msg(struct hns3_hw *hw); void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode); int hns3vf_mbx_send(struct hns3_hw *hw, -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
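The split done by this patch can be sketched as two independent dispatchers, each handling only the codes its side can actually receive (a hypothetical sketch — the enum values and function shapes below are illustrative, not the driver's real definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative message codes, not the real HNS3_MBX_* values. */
enum mbx_code {
	MBX_LINK_CHANGE = 1,
	MBX_ASSERTING_RESET,
	MBX_PUSH_LINK_STATUS,
};

/* PF side: after the split, the PF dispatcher only knows PF messages. */
static int pf_handle_msg(uint8_t code)
{
	switch (code) {
	case MBX_PUSH_LINK_STATUS:	/* firmware-reported link push */
		return 0;
	default:
		return -1;		/* unsupported on the PF side */
	}
}

/* VF side: the VF dispatcher no longer needs the PF-only branch. */
static int vf_handle_msg(uint8_t code)
{
	switch (code) {
	case MBX_LINK_CHANGE:
	case MBX_ASSERTING_RESET:
		return 0;
	default:
		return -1;	/* MBX_PUSH_LINK_STATUS no longer lands here */
	}
}
```

This removes the `hns->is_vf` branching and the cast between `hns3_mbx_pf_to_vf_cmd` and `hns3_mbx_vf_to_pf_cmd` that the shared handler needed.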
* Re: [PATCH 0/5] net/hns3: fix and refactor mailbox code 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (4 preceding siblings ...) 2023-11-08 3:44 ` [PATCH 5/5] net/hns3: refactor handle " Jie Hai @ 2023-11-09 18:50 ` Ferruh Yigit 2023-11-10 6:21 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai ` (2 subsequent siblings) 8 siblings, 1 reply; 33+ messages in thread From: Ferruh Yigit @ 2023-11-09 18:50 UTC (permalink / raw) To: Jie Hai, dev On 11/8/2023 3:44 AM, Jie Hai wrote: > This patchset fixes failure on sync mailbox and refactors some codes on mailbox. > > Dengdui Huang (5): > net/hns3: fix sync mailbox failure forever > net/hns3: refactor VF mailbox message struct > net/hns3: refactor PF mailbox message struct > net/hns3: refactor send mailbox function > net/hns3: refactor handle mailbox function > Hi Jie, Overall patchset looks good with minor issue below [1], but this set has high impact and not solving a critical defect etc, but mainly refactoring. We are very close to the release, there won't be enough time to fix any issue caused by this refactoring. My suggestion is to postpone the refactoring to next release, maybe get only the first fix patch in this release, what do you think? [1] Can you please fix the checkpatch warning: ### [PATCH] net/hns3: refactor handle mailbox function Warning in drivers/net/hns3/hns3_mbx.c: Using __atomic_xxx/__ATOMIC_XXX built-ins, prefer rte_atomic_xxx/rte_memory_order_xxx ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 0/5] net/hns3: fix and refactor mailbox code 2023-11-09 18:50 ` [PATCH 0/5] net/hns3: fix and refactor mailbox code Ferruh Yigit @ 2023-11-10 6:21 ` Jie Hai 2023-11-10 12:35 ` Ferruh Yigit 0 siblings, 1 reply; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:21 UTC (permalink / raw) To: Ferruh Yigit, dev On 2023/11/10 2:50, Ferruh Yigit wrote: > On 11/8/2023 3:44 AM, Jie Hai wrote: >> This patchset fixes failure on sync mailbox and refactors some codes on mailbox. >> >> Dengdui Huang (5): >> net/hns3: fix sync mailbox failure forever >> net/hns3: refactor VF mailbox message struct >> net/hns3: refactor PF mailbox message struct >> net/hns3: refactor send mailbox function >> net/hns3: refactor handle mailbox function >> > > Hi Jie, > > Overall patchset looks good with minor issue below [1], but this set has > high impact and not solving a critical defect etc, but mainly refactoring. > We are very close to the release, there won't be enough time to fix any > issue caused by this refactoring. > > My suggestion is to postpone the refactoring to next release, maybe get > only the first fix patch in this release, what do you think? > I think it's OK. > > > [1] > Can you please fix the checkpatch warning: > > ### [PATCH] net/hns3: refactor handle mailbox function > > Warning in drivers/net/hns3/hns3_mbx.c: > Using __atomic_xxx/__ATOMIC_XXX built-ins, prefer > rte_atomic_xxx/rte_memory_order_xxx > Thanks, I'will send V2 with these fixes. > . ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH 0/5] net/hns3: fix and refactor mailbox code 2023-11-10 6:21 ` Jie Hai @ 2023-11-10 12:35 ` Ferruh Yigit 0 siblings, 0 replies; 33+ messages in thread From: Ferruh Yigit @ 2023-11-10 12:35 UTC (permalink / raw) To: Jie Hai, dev On 11/10/2023 6:21 AM, Jie Hai wrote: > On 2023/11/10 2:50, Ferruh Yigit wrote: >> On 11/8/2023 3:44 AM, Jie Hai wrote: >>> This patchset fixes failure on sync mailbox and refactors some codes >>> on mailbox. >>> >>> Dengdui Huang (5): >>> net/hns3: fix sync mailbox failure forever >>> net/hns3: refactor VF mailbox message struct >>> net/hns3: refactor PF mailbox message struct >>> net/hns3: refactor send mailbox function >>> net/hns3: refactor handle mailbox function >>> >> >> Hi Jie, >> >> Overall patchset looks good with minor issue below [1], but this set has >> high impact and not solving a critical defect etc, but mainly >> refactoring. >> We are very close to the release, there won't be enough time to fix any >> issue caused by this refactoring. >> >> My suggestion is to postpone the refactoring to next release, maybe get >> only the first fix patch in this release, what do you think? >> > I think it's OK. > OK, I will proceed with it, please be sure it is fully verified. >> >> >> [1] >> Can you please fix the checkpatch warning: >> >> ### [PATCH] net/hns3: refactor handle mailbox function >> >> Warning in drivers/net/hns3/hns3_mbx.c: >> Using __atomic_xxx/__ATOMIC_XXX built-ins, prefer >> rte_atomic_xxx/rte_memory_order_xxx >> > Thanks, I'will send V2 with these fixes. >> . ^ permalink raw reply [flat|nested] 33+ messages in thread
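The checkpatch warning quoted above concerns DPDK's move from the gcc `__atomic_xxx` builtins to the `rte_atomic_xxx`/`rte_memory_order_xxx` wrappers, which is what v2 adds. A minimal sketch of the replacement, using plain C11 `<stdatomic.h>` names (an assumption made so the snippet builds without DPDK; the `rte_` wrappers mirror these):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t dma_memzone_id;

/* old: id = __atomic_fetch_add(&hns3_dma_memzone_id, 1, __ATOMIC_RELAXED);
 * new: id = rte_atomic_fetch_add_explicit(&hns3_dma_memzone_id, 1,
 *                                         rte_memory_order_relaxed);
 * C11 equivalent used here: */
static uint64_t next_memzone_id(void)
{
	return atomic_fetch_add_explicit(&dma_memzone_id, 1,
					 memory_order_relaxed);
}
```

Relaxed ordering is sufficient here because the counter is only used to generate unique memzone names, not to synchronize other memory accesses.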
* [PATCH v2 0/6] net/hns3: fix and refactor some codes 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (5 preceding siblings ...) 2023-11-09 18:50 ` [PATCH 0/5] net/hns3: fix and refactor mailbox code Ferruh Yigit @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 1/6] net/hns3: fix sync mailbox failure forever Jie Hai ` (6 more replies) 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai 2023-12-08 6:55 ` [PATCH v4 " Jie Hai 8 siblings, 7 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit; +Cc: lihuisong, fengchengwen, liudongdong3 This patchset fixes the sync mailbox failure, refactors some mailbox code, and also replaces the gcc builtin __atomic_xxx with rte_atomic_xxx. -- v2: 1. fix misspelling errors in commit log and code. 2. replace __atomic_xxx with rte_atomic_xxx. -- Dengdui Huang (5): net/hns3: fix sync mailbox failure forever net/hns3: refactor VF mailbox message struct net/hns3: refactor PF mailbox message struct net/hns3: refactor send mailbox function net/hns3: refactor handle mailbox function Jie Hai (1): net/hns3: use stdatomic API drivers/net/hns3/hns3_cmd.c | 25 +-- drivers/net/hns3/hns3_dcb.c | 3 +- drivers/net/hns3/hns3_ethdev.c | 53 ++++--- drivers/net/hns3/hns3_ethdev.h | 12 +- drivers/net/hns3/hns3_ethdev_vf.c | 237 +++++++++++++++------------- drivers/net/hns3/hns3_intr.c | 39 +++-- drivers/net/hns3/hns3_mbx.c | 251 +++++++++++++----------------- drivers/net/hns3/hns3_mbx.h | 104 +++++++++---- drivers/net/hns3/hns3_mp.c | 9 +- drivers/net/hns3/hns3_rxtx.c | 33 ++-- drivers/net/hns3/hns3_tm.c | 6 +- 11 files changed, 421 insertions(+), 351 deletions(-) -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread

* [PATCH v2 1/6] net/hns3: fix sync mailbox failure forever 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 2/6] net/hns3: use stdatomic API Jie Hai ` (5 subsequent siblings) 6 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang, Wei Hu (Xavier), Ferruh Yigit, Min Hu (Connor), Chunsong Feng, Hao Chen Cc: lihuisong, fengchengwen, liudongdong3 From: Dengdui Huang <huangdengdui@huawei.com> Currently, the hns3 VF driver uses the following fields to match the response and request messages for the synchronous mailbox communication between VF and PF. 1. req_msg_data, which consists of the message code and subcode, is used to match request and response. 2. head means the number of messages the VF has sent successfully. 3. tail means the number of responses the VF has received successfully. 4. lost means the number of VF messages that timed out. And 'head', 'tail' and 'lost' are dynamically updated during the communication. Now there is an issue that all sync mailbox messages will fail to be sent forever in the following case: 1. VF sends message A, then head=UINT32_MAX-1, tail=UINT32_MAX-3, lost=2. 2. VF sends message B, then head=UINT32_MAX, tail=UINT32_MAX-2, lost=2. 3. VF sends message C; the message will time out because it can't get the response within 500ms. Then head=0, tail=0, lost=2. note: tail is assigned to head if tail > head according to the current code logic. From now on, all subsequent sync mailbox messages fail to be sent. Using the fields 'lost', 'tail' and 'head' is overly complicated. Instead, the code and subcode of the request sync mailbox are used as the matching tag of the message, to match the response message when receiving the synchronization response. 
This patch drops these fields and uses the following solution to solve this issue: In the handling response message process, using the req_msg_data of the request and response message to judge whether the sync mailbox message has been received. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_cmd.c | 3 -- drivers/net/hns3/hns3_mbx.c | 81 ++++++------------------------------- drivers/net/hns3/hns3_mbx.h | 10 ----- 3 files changed, 13 insertions(+), 81 deletions(-) diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index a5c4c11dc8c4..2c1664485bef 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -731,9 +731,6 @@ hns3_cmd_init(struct hns3_hw *hw) hw->cmq.csq.next_to_use = 0; hw->cmq.crq.next_to_clean = 0; hw->cmq.crq.next_to_use = 0; - hw->mbx_resp.head = 0; - hw->mbx_resp.tail = 0; - hw->mbx_resp.lost = 0; hns3_cmd_init_regs(hw); rte_spinlock_unlock(&hw->cmq.crq.lock); diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 8e0a58aa02d8..f1743c195efa 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -40,23 +40,6 @@ hns3_resp_to_errno(uint16_t resp_code) return -EIO; } -static void -hns3_mbx_proc_timeout(struct hns3_hw *hw, uint16_t code, uint16_t subcode) -{ - if (hw->mbx_resp.matching_scheme == - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL) { - hw->mbx_resp.lost++; - hns3_err(hw, - "VF could not get mbx(%u,%u) head(%u) tail(%u) " - "lost(%u) from PF", - code, subcode, hw->mbx_resp.head, hw->mbx_resp.tail, - hw->mbx_resp.lost); - return; - } - - hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode); -} - static int hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, uint8_t *resp_data, uint16_t resp_len) @@ -67,7 +50,6 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct 
hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_mbx_resp_status *mbx_resp; uint32_t wait_time = 0; - bool received; if (resp_len > HNS3_MBX_MAX_RESP_DATA_SIZE) { hns3_err(hw, "VF mbx response len(=%u) exceeds maximum(=%d)", @@ -93,20 +75,14 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, hns3_dev_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); - if (hw->mbx_resp.matching_scheme == - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL) - received = (hw->mbx_resp.head == - hw->mbx_resp.tail + hw->mbx_resp.lost); - else - received = hw->mbx_resp.received_match_resp; - if (received) + if (hw->mbx_resp.received_match_resp) break; wait_time += HNS3_WAIT_RESP_US; } hw->mbx_resp.req_msg_data = 0; if (wait_time >= mbx_time_limit) { - hns3_mbx_proc_timeout(hw, code, subcode); + hns3_err(hw, "VF could not get mbx(%u,%u) from PF", code, subcode); return -ETIME; } rte_io_rmb(); @@ -132,7 +108,6 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) * we get the exact scheme which is used. */ hw->mbx_resp.req_msg_data = (uint32_t)code << 16 | subcode; - hw->mbx_resp.head++; /* Update match_id and ensure the value of match_id is not zero */ hw->mbx_resp.match_id++; @@ -185,7 +160,6 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, req->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { - hw->mbx_resp.head--; rte_spinlock_unlock(&hw->mbx_resp.lock); hns3_err(hw, "VF failed(=%d) to send mbx message to PF", ret); @@ -254,41 +228,10 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, hns3_schedule_reset(HNS3_DEV_HW_TO_ADAPTER(hw)); } -/* - * Case1: receive response after timeout, req_msg_data - * is 0, not equal resp_msg, do lost-- - * Case2: receive last response during new send_mbx_msg, - * req_msg_data is different with resp_msg, let - * lost--, continue to wait for response. 
- */ -static void -hns3_update_resp_position(struct hns3_hw *hw, uint32_t resp_msg) -{ - struct hns3_mbx_resp_status *resp = &hw->mbx_resp; - uint32_t tail = resp->tail + 1; - - if (tail > resp->head) - tail = resp->head; - if (resp->req_msg_data != resp_msg) { - if (resp->lost) - resp->lost--; - hns3_warn(hw, "Received a mismatched response req_msg(%x) " - "resp_msg(%x) head(%u) tail(%u) lost(%u)", - resp->req_msg_data, resp_msg, resp->head, tail, - resp->lost); - } else if (tail + resp->lost > resp->head) { - resp->lost--; - hns3_warn(hw, "Received a new response again resp_msg(%x) " - "head(%u) tail(%u) lost(%u)", resp_msg, - resp->head, tail, resp->lost); - } - rte_io_wmb(); - resp->tail = tail; -} - static void hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { +#define HNS3_MBX_RESP_CODE_OFFSET 16 struct hns3_mbx_resp_status *resp = &hw->mbx_resp; uint32_t msg_data; @@ -298,12 +241,6 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * match_id to its response. So VF could use the match_id * to match the request. */ - if (resp->matching_scheme != - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID) { - resp->matching_scheme = - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID; - hns3_info(hw, "detect mailbox support match id!"); - } if (req->match_id == resp->match_id) { resp->resp_status = hns3_resp_to_errno(req->msg[3]); memcpy(resp->additional_info, &req->msg[4], @@ -319,11 +256,19 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. 
*/ + msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + if (resp->req_msg_data != msg_data) { + hns3_warn(hw, + "received response tag (%u) is mismatched with requested tag (%u)", + msg_data, resp->req_msg_data); + return; + } + resp->resp_status = hns3_resp_to_errno(req->msg[3]); memcpy(resp->additional_info, &req->msg[4], HNS3_MBX_MAX_RESP_DATA_SIZE); - msg_data = (uint32_t)req->msg[1] << 16 | req->msg[2]; - hns3_update_resp_position(hw, msg_data); + rte_io_wmb(); + resp->received_match_resp = true; } static void diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index c378783c6ca4..4a328802b920 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -93,21 +93,11 @@ enum hns3_mbx_link_fail_subcode { #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 -enum { - HNS3_MBX_RESP_MATCHING_SCHEME_OF_ORIGINAL = 0, - HNS3_MBX_RESP_MATCHING_SCHEME_OF_MATCH_ID -}; - struct hns3_mbx_resp_status { rte_spinlock_t lock; /* protects against contending sync cmd resp */ - uint8_t matching_scheme; - /* The following fields used in the matching scheme for original */ uint32_t req_msg_data; - uint32_t head; - uint32_t tail; - uint32_t lost; /* The following fields used in the matching scheme for match_id */ uint16_t match_id; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 2/6] net/hns3: use stdatomic API 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai 2023-11-10 6:13 ` [PATCH v2 1/6] net/hns3: fix sync mailbox failure forever Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 3/6] net/hns3: refactor VF mailbox message struct Jie Hai ` (4 subsequent siblings) 6 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang; +Cc: lihuisong, fengchengwen, liudongdong3 Replace the use of gcc builtin __atomic_xxx intrinsics with corresponding rte_atomic_xxx optional stdatomic API Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_cmd.c | 22 +++++++----- drivers/net/hns3/hns3_dcb.c | 3 +- drivers/net/hns3/hns3_ethdev.c | 51 ++++++++++++++++----------- drivers/net/hns3/hns3_ethdev.h | 12 ++++--- drivers/net/hns3/hns3_ethdev_vf.c | 57 ++++++++++++++++--------------- drivers/net/hns3/hns3_intr.c | 39 ++++++++++++--------- drivers/net/hns3/hns3_mbx.c | 6 ++-- drivers/net/hns3/hns3_mp.c | 9 +++-- drivers/net/hns3/hns3_rxtx.c | 15 +++++--- drivers/net/hns3/hns3_tm.c | 6 ++-- 10 files changed, 131 insertions(+), 89 deletions(-) diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index 2c1664485bef..4e1a02a75e0f 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -49,7 +49,8 @@ hns3_allocate_dma_mem(struct hns3_hw *hw, struct hns3_cmq_ring *ring, char z_name[RTE_MEMZONE_NAMESIZE]; snprintf(z_name, sizeof(z_name), "hns3_dma_%" PRIu64, - __atomic_fetch_add(&hns3_dma_memzone_id, 1, __ATOMIC_RELAXED)); + rte_atomic_fetch_add_explicit(&hns3_dma_memzone_id, 1, + rte_memory_order_relaxed)); mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, RTE_MEMZONE_IOVA_CONTIG, alignment, RTE_PGSIZE_2M); @@ -198,8 +199,8 @@ hns3_cmd_csq_clean(struct hns3_hw *hw) hns3_err(hw, "wrong cmd addr(%0x) head (%u, %u-%u)", addr, head, csq->next_to_use, csq->next_to_clean); if 
(rte_eal_process_type() == RTE_PROC_PRIMARY) { - __atomic_store_n(&hw->reset.disable_cmd, 1, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); hns3_schedule_delayed_reset(HNS3_DEV_HW_TO_ADAPTER(hw)); } @@ -313,7 +314,8 @@ static int hns3_cmd_poll_reply(struct hns3_hw *hw) if (hns3_cmd_csq_done(hw)) return 0; - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { hns3_err(hw, "Don't wait for reply because of disable_cmd"); return -EBUSY; @@ -360,7 +362,8 @@ hns3_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc, int num) int retval; uint32_t ntc; - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) return -EBUSY; rte_spinlock_lock(&hw->cmq.csq.lock); @@ -745,7 +748,8 @@ hns3_cmd_init(struct hns3_hw *hw) ret = -EBUSY; goto err_cmd_init; } - __atomic_store_n(&hw->reset.disable_cmd, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 0, + rte_memory_order_relaxed); ret = hns3_cmd_query_firmware_version_and_capability(hw); if (ret) { @@ -788,7 +792,8 @@ hns3_cmd_init(struct hns3_hw *hw) return 0; err_cmd_init: - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); return ret; } @@ -817,7 +822,8 @@ hns3_cmd_uninit(struct hns3_hw *hw) if (!hns->is_vf) (void)hns3_firmware_compat_config(hw, false); - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); /* * A delay is added to ensure that the register cleanup operations diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c index 2831d3dc6205..08c77e04857d 100644 --- a/drivers/net/hns3/hns3_dcb.c +++ b/drivers/net/hns3/hns3_dcb.c @@ -648,7 +648,8 @@ 
hns3_set_rss_size(struct hns3_hw *hw, uint16_t nb_rx_q) * and configured directly to the hardware in the RESET_STAGE_RESTORE * stage of the reset process. */ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed) == 0) { for (i = 0; i < hw->rss_ind_tbl_size; i++) rss_cfg->rss_indirection_tbl[i] = i % hw->alloc_rss_size; diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 941d047bf1bd..4b63308e8fdf 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -134,7 +134,8 @@ hns3_proc_imp_reset_event(struct hns3_adapter *hns, uint32_t *vec_val) { struct hns3_hw *hw = &hns->hw; - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending); *vec_val = BIT(HNS3_VECTOR0_IMPRESET_INT_B); hw->reset.stats.imp_cnt++; @@ -148,7 +149,8 @@ hns3_proc_global_reset_event(struct hns3_adapter *hns, uint32_t *vec_val) { struct hns3_hw *hw = &hns->hw; - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); hns3_atomic_set_bit(HNS3_GLOBAL_RESET, &hw->reset.pending); *vec_val = BIT(HNS3_VECTOR0_GLOBALRESET_INT_B); hw->reset.stats.global_cnt++; @@ -1151,7 +1153,8 @@ hns3_init_vlan_config(struct hns3_adapter *hns) * ensure that the hardware configuration remains unchanged before and * after reset. 
*/ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed) == 0) { hw->port_base_vlan_cfg.state = HNS3_PORT_BASE_VLAN_DISABLE; hw->port_base_vlan_cfg.pvid = HNS3_INVALID_PVID; } @@ -1175,7 +1178,8 @@ hns3_init_vlan_config(struct hns3_adapter *hns) * we will restore configurations to hardware in hns3_restore_vlan_table * and hns3_restore_vlan_conf later. */ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed) == 0) { ret = hns3_vlan_pvid_configure(hns, HNS3_INVALID_PVID, 0); if (ret) { hns3_err(hw, "pvid set fail in pf, ret =%d", ret); @@ -5059,7 +5063,8 @@ hns3_dev_start(struct rte_eth_dev *dev) int ret; PMD_INIT_FUNC_TRACE(); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) return -EBUSY; rte_spinlock_lock(&hw->lock); @@ -5150,7 +5155,8 @@ hns3_do_stop(struct hns3_adapter *hns) * during reset and is required to be released after the reset is * completed. 
*/ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed) == 0) hns3_dev_release_mbufs(hns); ret = hns3_cfg_mac_mode(hw, false); @@ -5158,7 +5164,8 @@ hns3_do_stop(struct hns3_adapter *hns) return ret; hw->mac.link_status = RTE_ETH_LINK_DOWN; - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed) == 0) { hns3_configure_all_mac_addr(hns, true); ret = hns3_reset_all_tqps(hns); if (ret) { @@ -5184,7 +5191,8 @@ hns3_dev_stop(struct rte_eth_dev *dev) hns3_stop_rxtx_datapath(dev); rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed) == 0) { hns3_tm_dev_stop_proc(hw); hns3_config_mac_tnl_int(hw, false); hns3_stop_tqps(hw); @@ -5553,10 +5561,12 @@ hns3_detect_reset_event(struct hns3_hw *hw) last_req = hns3_get_reset_level(hns, &hw->reset.pending); vector0_intr_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG); if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state) { - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); new_req = HNS3_IMP_RESET; } else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state) { - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); new_req = HNS3_GLOBAL_RESET; } @@ -5744,7 +5754,8 @@ hns3_prepare_reset(struct hns3_adapter *hns) * any mailbox handling or command to firmware is only valid * after hns3_cmd_init is called. 
*/ - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, + rte_memory_order_relaxed); hw->reset.stats.request_cnt++; break; case HNS3_IMP_RESET: @@ -5799,7 +5810,8 @@ hns3_stop_service(struct hns3_adapter *hns) * from table space. Hence, for function reset software intervention is * required to delete the entries */ - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed) == 0) hns3_configure_all_mc_mac_addr(hns, true); rte_spinlock_unlock(&hw->lock); @@ -5920,10 +5932,10 @@ hns3_reset_service(void *param) * The interrupt may have been lost. It is necessary to handle * the interrupt to recover from the error. */ - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == - SCHEDULE_DEFERRED) { - __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED, - __ATOMIC_RELAXED); + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) == SCHEDULE_DEFERRED) { + rte_atomic_store_explicit(&hw->reset.schedule, + SCHEDULE_REQUESTED, rte_memory_order_relaxed); hns3_err(hw, "Handling interrupts in delayed tasks"); hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]); reset_level = hns3_get_reset_level(hns, &hw->reset.pending); @@ -5932,7 +5944,8 @@ hns3_reset_service(void *param) hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending); } } - __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_NONE, + rte_memory_order_relaxed); /* * Check if there is any ongoing reset in the hardware. 
This status can @@ -6582,8 +6595,8 @@ hns3_dev_init(struct rte_eth_dev *eth_dev) hw->adapter_state = HNS3_NIC_INITIALIZED; - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == - SCHEDULE_PENDING) { + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) == SCHEDULE_PENDING) { hns3_err(hw, "Reschedule reset service after dev_init"); hns3_schedule_reset(hns); } else { diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index 668f141e32ed..a0d62a5fd33f 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -999,20 +999,23 @@ hns3_atomic_test_bit(unsigned int nr, volatile uint64_t *addr) { uint64_t res; - res = (__atomic_load_n(addr, __ATOMIC_RELAXED) & (1UL << nr)) != 0; + res = (rte_atomic_load_explicit(addr, rte_memory_order_relaxed) & + (1UL << nr)) != 0; return res; } static inline void hns3_atomic_set_bit(unsigned int nr, volatile uint64_t *addr) { - __atomic_fetch_or(addr, (1UL << nr), __ATOMIC_RELAXED); + rte_atomic_fetch_or_explicit(addr, (1UL << nr), + rte_memory_order_relaxed); } static inline void hns3_atomic_clear_bit(unsigned int nr, volatile uint64_t *addr) { - __atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED); + rte_atomic_fetch_and_explicit(addr, ~(1UL << nr), + rte_memory_order_relaxed); } static inline uint64_t @@ -1020,7 +1023,8 @@ hns3_test_and_clear_bit(unsigned int nr, volatile uint64_t *addr) { uint64_t mask = (1UL << nr); - return __atomic_fetch_and(addr, ~mask, __ATOMIC_RELAXED) & mask; + return rte_atomic_fetch_and_explicit(addr, + ~mask, rte_memory_order_relaxed) & mask; } int diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 156fb905f990..51d17ee8a726 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -478,7 +478,7 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) * MTU value issued by hns3 VF PMD must be less than or equal to * PF's MTU. 
*/ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed)) { hns3_err(hw, "Failed to set mtu during resetting"); return -EIO; } @@ -546,7 +546,7 @@ hns3vf_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval) rst_ing_reg = hns3_read_dev(hw, HNS3_FUN_RST_ING); hns3_warn(hw, "resetting reg: 0x%x", rst_ing_reg); hns3_atomic_set_bit(HNS3_VF_RESET, &hw->reset.pending); - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, rte_memory_order_relaxed); val = hns3_read_dev(hw, HNS3_VF_RST_ING); hns3_write_dev(hw, HNS3_VF_RST_ING, val | HNS3_VF_RST_ING_BIT); val = cmdq_stat_reg & ~BIT(HNS3_VECTOR0_RST_INT_B); @@ -618,8 +618,8 @@ hns3vf_update_push_lsc_cap(struct hns3_hw *hw, bool supported) struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); if (vf->pf_push_lsc_cap == HNS3_PF_PUSH_LSC_CAP_UNKNOWN) - __atomic_compare_exchange(&vf->pf_push_lsc_cap, &exp, &val, 0, - __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE); + rte_atomic_compare_exchange_strong_explicit(&vf->pf_push_lsc_cap, + &exp, &val, rte_memory_order_acquire, rte_memory_order_acquire); } static void @@ -633,8 +633,8 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN; struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); - __atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, - __ATOMIC_RELEASE); + rte_atomic_store_explicit(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, + rte_memory_order_release); (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, NULL, 0); @@ -649,7 +649,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * mailbox from PF driver to get this capability. 
*/ hns3_dev_handle_mbx_msg(hw); - if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) != + if (rte_atomic_load_explicit(&vf->pf_push_lsc_cap, rte_memory_order_acquire) != HNS3_PF_PUSH_LSC_CAP_UNKNOWN) break; remain_ms--; @@ -660,10 +660,10 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * state: unknown (means pf not ack), not_supported, supported. * Here config it as 'not_supported' when it's 'unknown' state. */ - __atomic_compare_exchange(&vf->pf_push_lsc_cap, &exp, &val, 0, - __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE); + rte_atomic_compare_exchange_strong_explicit(&vf->pf_push_lsc_cap, &exp, + &val, rte_memory_order_acquire, rte_memory_order_acquire); - if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) == + if (rte_atomic_load_explicit(&vf->pf_push_lsc_cap, rte_memory_order_acquire) == HNS3_PF_PUSH_LSC_CAP_SUPPORTED) { hns3_info(hw, "detect PF support push link status change!"); } else { @@ -897,7 +897,7 @@ hns3vf_request_link_info(struct hns3_hw *hw) bool send_req; int ret; - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed)) return; send_req = vf->pf_push_lsc_cap == HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED || @@ -933,7 +933,7 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, * sending request to PF kernel driver, then could update link status by * process PF kernel driver's link status mailbox message. 
*/ - if (!__atomic_load_n(&vf->poll_job_started, __ATOMIC_RELAXED)) + if (!rte_atomic_load_explicit(&vf->poll_job_started, rte_memory_order_relaxed)) return; if (hw->adapter_state != HNS3_NIC_STARTED) @@ -972,7 +972,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) struct hns3_hw *hw = &hns->hw; int ret; - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed)) { hns3_err(hw, "vf set vlan id failed during resetting, vlan_id =%u", vlan_id); @@ -1032,7 +1032,7 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask) unsigned int tmp_mask; int ret = 0; - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed)) { hns3_err(hw, "vf set vlan offload failed during resetting, mask = 0x%x", mask); return -EIO; @@ -1222,7 +1222,7 @@ hns3vf_start_poll_job(struct rte_eth_dev *dev) if (vf->pf_push_lsc_cap == HNS3_PF_PUSH_LSC_CAP_SUPPORTED) vf->req_link_info_cnt = HNS3_REQUEST_LINK_INFO_REMAINS_CNT; - __atomic_store_n(&vf->poll_job_started, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&vf->poll_job_started, 1, rte_memory_order_relaxed); hns3vf_service_handler(dev); } @@ -1234,7 +1234,7 @@ hns3vf_stop_poll_job(struct rte_eth_dev *dev) rte_eal_alarm_cancel(hns3vf_service_handler, dev); - __atomic_store_n(&vf->poll_job_started, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&vf->poll_job_started, 0, rte_memory_order_relaxed); } static int @@ -1468,10 +1468,10 @@ hns3vf_do_stop(struct hns3_adapter *hns) * during reset and is required to be released after the reset is * completed. 
*/ - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed) == 0) hns3_dev_release_mbufs(hns); - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, rte_memory_order_relaxed) == 0) { hns3_configure_all_mac_addr(hns, true); ret = hns3_reset_all_tqps(hns); if (ret) { @@ -1496,7 +1496,7 @@ hns3vf_dev_stop(struct rte_eth_dev *dev) hns3_stop_rxtx_datapath(dev); rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) { + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed) == 0) { hns3_stop_tqps(hw); hns3vf_do_stop(hns); hns3_unmap_rx_interrupt(dev); @@ -1611,7 +1611,7 @@ hns3vf_dev_start(struct rte_eth_dev *dev) int ret; PMD_INIT_FUNC_TRACE(); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) + if (rte_atomic_load_explicit(&hw->reset.resetting, rte_memory_order_relaxed)) return -EBUSY; rte_spinlock_lock(&hw->lock); @@ -1795,7 +1795,7 @@ hns3vf_prepare_reset(struct hns3_adapter *hns) if (ret) return ret; } - __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 1, rte_memory_order_relaxed); return 0; } @@ -1836,7 +1836,7 @@ hns3vf_stop_service(struct hns3_adapter *hns) * from table space. Hence, for function reset software intervention is * required to delete the entries. */ - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, rte_memory_order_relaxed) == 0) hns3_configure_all_mc_mac_addr(hns, true); rte_spinlock_unlock(&hw->lock); @@ -2018,10 +2018,10 @@ hns3vf_reset_service(void *param) * The interrupt may have been lost. It is necessary to handle * the interrupt to recover from the error. 
*/ - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == + if (rte_atomic_load_explicit(&hw->reset.schedule, rte_memory_order_relaxed) == SCHEDULE_DEFERRED) { - __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_REQUESTED, + rte_memory_order_relaxed); hns3_err(hw, "Handling interrupts in delayed tasks"); hns3vf_interrupt_handler(&rte_eth_devices[hw->data->port_id]); reset_level = hns3vf_get_reset_level(hw, &hw->reset.pending); @@ -2030,7 +2030,7 @@ hns3vf_reset_service(void *param) hns3_atomic_set_bit(HNS3_VF_RESET, &hw->reset.pending); } } - __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_NONE, rte_memory_order_relaxed); /* * Hardware reset has been notified, we now have to poll & check if @@ -2225,8 +2225,9 @@ hns3vf_dev_init(struct rte_eth_dev *eth_dev) hw->adapter_state = HNS3_NIC_INITIALIZED; - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == - SCHEDULE_PENDING) { + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) == + SCHEDULE_PENDING) { hns3_err(hw, "Reschedule reset service after dev_init"); hns3_schedule_reset(hns); } else { diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c index c5a3e3797cbd..cb758cf3a9b7 100644 --- a/drivers/net/hns3/hns3_intr.c +++ b/drivers/net/hns3/hns3_intr.c @@ -2402,7 +2402,8 @@ hns3_reset_init(struct hns3_hw *hw) hw->reset.request = 0; hw->reset.pending = 0; hw->reset.resetting = 0; - __atomic_store_n(&hw->reset.disable_cmd, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.disable_cmd, 0, + rte_memory_order_relaxed); hw->reset.wait_data = rte_zmalloc("wait_data", sizeof(struct hns3_wait_data), 0); if (!hw->reset.wait_data) { @@ -2419,8 +2420,8 @@ hns3_schedule_reset(struct hns3_adapter *hns) /* Reschedule the reset process after successful initialization */ if (hw->adapter_state 
== HNS3_NIC_UNINITIALIZED) { - __atomic_store_n(&hw->reset.schedule, SCHEDULE_PENDING, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_PENDING, + rte_memory_order_relaxed); return; } @@ -2428,15 +2429,15 @@ hns3_schedule_reset(struct hns3_adapter *hns) return; /* Schedule restart alarm if it is not scheduled yet */ - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == - SCHEDULE_REQUESTED) + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) == SCHEDULE_REQUESTED) return; - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) == - SCHEDULE_DEFERRED) + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) == SCHEDULE_DEFERRED) rte_eal_alarm_cancel(hw->reset.ops->reset_service, hns); - __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_REQUESTED, + rte_memory_order_relaxed); rte_eal_alarm_set(SWITCH_CONTEXT_US, hw->reset.ops->reset_service, hns); } @@ -2453,11 +2454,11 @@ hns3_schedule_delayed_reset(struct hns3_adapter *hns) return; } - if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) != - SCHEDULE_NONE) + if (rte_atomic_load_explicit(&hw->reset.schedule, + rte_memory_order_relaxed) != SCHEDULE_NONE) return; - __atomic_store_n(&hw->reset.schedule, SCHEDULE_DEFERRED, - __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hw->reset.schedule, SCHEDULE_DEFERRED, + rte_memory_order_relaxed); rte_eal_alarm_set(DEFERRED_SCHED_US, hw->reset.ops->reset_service, hns); } @@ -2633,7 +2634,8 @@ hns3_reset_err_handle(struct hns3_adapter *hns) * Regardless of whether the execution is successful or not, the * flow after execution must be continued. 
*/ - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) (void)hns3_cmd_init(hw); reset_fail: hw->reset.attempts = 0; @@ -2661,7 +2663,8 @@ hns3_reset_pre(struct hns3_adapter *hns) int ret; if (hw->reset.stage == RESET_STAGE_NONE) { - __atomic_store_n(&hns->hw.reset.resetting, 1, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hns->hw.reset.resetting, 1, + rte_memory_order_relaxed); hw->reset.stage = RESET_STAGE_DOWN; hns3_report_reset_begin(hw); ret = hw->reset.ops->stop_service(hns); @@ -2750,7 +2753,8 @@ hns3_reset_post(struct hns3_adapter *hns) hns3_notify_reset_ready(hw, false); hns3_clear_reset_level(hw, &hw->reset.pending); hns3_clear_reset_event(hw); - __atomic_store_n(&hns->hw.reset.resetting, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hns->hw.reset.resetting, 0, + rte_memory_order_relaxed); hw->reset.attempts = 0; hw->reset.stats.success_cnt++; hw->reset.stage = RESET_STAGE_NONE; @@ -2812,7 +2816,8 @@ hns3_reset_fail_handle(struct hns3_adapter *hns) hw->reset.mbuf_deferred_free = false; } rte_spinlock_unlock(&hw->lock); - __atomic_store_n(&hns->hw.reset.resetting, 0, __ATOMIC_RELAXED); + rte_atomic_store_explicit(&hns->hw.reset.resetting, 0, + rte_memory_order_relaxed); hw->reset.stage = RESET_STAGE_NONE; hns3_clock_gettime(&tv); timersub(&tv, &hw->reset.start_time, &tv_delta); diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index f1743c195efa..7af56ff23deb 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -59,7 +59,8 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, mbx_time_limit = (uint32_t)hns->mbx_time_limit_ms * US_PER_MS; while (wait_time < mbx_time_limit) { - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { hns3_err(hw, "Don't wait for mbx response because of " 
"disable_cmd"); return -EBUSY; @@ -425,7 +426,8 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) } while (!hns3_cmd_crq_empty(hw)) { - if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { rte_spinlock_unlock(&hw->cmq.crq.lock); return; } diff --git a/drivers/net/hns3/hns3_mp.c b/drivers/net/hns3/hns3_mp.c index 556f1941c6b2..8ee97a7c598a 100644 --- a/drivers/net/hns3/hns3_mp.c +++ b/drivers/net/hns3/hns3_mp.c @@ -151,7 +151,8 @@ mp_req_on_rxtx(struct rte_eth_dev *dev, enum hns3_mp_req_type type) int i; if (rte_eal_process_type() == RTE_PROC_SECONDARY || - __atomic_load_n(&hw->secondary_cnt, __ATOMIC_RELAXED) == 0) + rte_atomic_load_explicit(&hw->secondary_cnt, + rte_memory_order_relaxed) == 0) return; if (!mp_req_type_is_valid(type)) { @@ -277,7 +278,8 @@ hns3_mp_init(struct rte_eth_dev *dev) ret); return ret; } - __atomic_fetch_add(&hw->secondary_cnt, 1, __ATOMIC_RELAXED); + rte_atomic_fetch_add_explicit(&hw->secondary_cnt, 1, + rte_memory_order_relaxed); } else { ret = hns3_mp_init_primary(); if (ret) { @@ -297,7 +299,8 @@ void hns3_mp_uninit(struct rte_eth_dev *dev) struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() != RTE_PROC_PRIMARY) - __atomic_fetch_sub(&hw->secondary_cnt, 1, __ATOMIC_RELAXED); + rte_atomic_fetch_sub_explicit(&hw->secondary_cnt, 1, + rte_memory_order_relaxed); process_data.eth_dev_cnt--; if (process_data.eth_dev_cnt == 0) { diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 09b7e90c7000..bb600475e91e 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -4465,7 +4465,8 @@ hns3_set_rxtx_function(struct rte_eth_dev *eth_dev) struct hns3_adapter *hns = eth_dev->data->dev_private; if (hns->hw.adapter_state == HNS3_NIC_STARTED && - __atomic_load_n(&hns->hw.reset.resetting, __ATOMIC_RELAXED) == 0) { + rte_atomic_load_explicit(&hns->hw.reset.resetting, + 
rte_memory_order_relaxed) == 0) { eth_dev->rx_pkt_burst = hns3_get_rx_function(eth_dev); eth_dev->rx_descriptor_status = hns3_dev_rx_descriptor_status; eth_dev->tx_pkt_burst = hw->set_link_down ? @@ -4531,7 +4532,8 @@ hns3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { hns3_err(hw, "fail to start Rx queue during resetting."); rte_spinlock_unlock(&hw->lock); return -EIO; @@ -4587,7 +4589,8 @@ hns3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { hns3_err(hw, "fail to stop Rx queue during resetting."); rte_spinlock_unlock(&hw->lock); return -EIO; @@ -4616,7 +4619,8 @@ hns3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { hns3_err(hw, "fail to start Tx queue during resetting."); rte_spinlock_unlock(&hw->lock); return -EIO; @@ -4649,7 +4653,8 @@ hns3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) rte_spinlock_lock(&hw->lock); - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { hns3_err(hw, "fail to stop Tx queue during resetting."); rte_spinlock_unlock(&hw->lock); return -EIO; diff --git a/drivers/net/hns3/hns3_tm.c b/drivers/net/hns3/hns3_tm.c index d9691640140b..656db9b170b2 100644 --- a/drivers/net/hns3/hns3_tm.c +++ b/drivers/net/hns3/hns3_tm.c @@ -1051,7 +1051,8 @@ hns3_tm_hierarchy_commit(struct rte_eth_dev *dev, if (error == NULL) return -EINVAL; - if 
(__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; error->message = "device is resetting"; /* don't goto fail_clear, user may try later */ @@ -1141,7 +1142,8 @@ hns3_tm_node_shaper_update(struct rte_eth_dev *dev, if (error == NULL) return -EINVAL; - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED)) { + if (rte_atomic_load_explicit(&hw->reset.resetting, + rte_memory_order_relaxed)) { error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; error->message = "device is resetting"; return -EBUSY; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 3/6] net/hns3: refactor VF mailbox message struct 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai 2023-11-10 6:13 ` [PATCH v2 1/6] net/hns3: fix sync mailbox failure forever Jie Hai 2023-11-10 6:13 ` [PATCH v2 2/6] net/hns3: use stdatomic API Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 4/6] net/hns3: refactor PF " Jie Hai ` (3 subsequent siblings) 6 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang, Chunsong Feng, Huisong Li, Min Hu (Connor), Wei Hu (Xavier), Hao Chen Cc: fengchengwen, liudongdong3 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the VF to PF mailbox command is used to communicate with the PF driver, and it exists only as a raw array. As a result, complicated feature commands, such as setting promiscuous mode, mapping/unmapping a ring vector and setting a VLAN id, have to be assembled with magic numbers, which is bad for driver maintainability. So this patch refactors these messages by extracting a hns3_vf_to_pf_msg structure. In addition, the PF link change event message, which is reported by the firmware in hns3_mbx_vf_to_pf_cmd format, also needs to be modified. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 54 +++++++++++++--------------- drivers/net/hns3/hns3_mbx.c | 24 ++++++------- drivers/net/hns3/hns3_mbx.h | 58 +++++++++++++++++++++++-------- 3 files changed, 78 insertions(+), 58 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 51d17ee8a726..3cc780985716 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * the packets with vlan tag in promiscuous mode. 
*/ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; - req->msg[1] = en_bc_pmc ? 1 : 0; - req->msg[2] = en_uc_pmc ? 1 : 0; - req->msg[3] = en_mc_pmc ? 1 : 0; - req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; + req->msg.code = HNS3_MBX_SET_PROMISC_MODE; + req->msg.en_bc = en_bc_pmc ? 1 : 0; + req->msg.en_uc = en_uc_pmc ? 1 : 0; + req->msg.en_mc = en_mc_pmc ? 1 : 0; + req->msg.en_limit_promisc = + hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; ret = hns3_cmd_send(hw, &desc, 1); if (ret) @@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { - struct hns3_vf_bind_vector_msg bind_msg; +#define HNS3_RING_VECTOR_DATA_SIZE 14 + struct hns3_vf_to_pf_msg req = {0}; const char *op_str; - uint16_t code; int ret; - memset(&bind_msg, 0, sizeof(bind_msg)); - code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : + req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : HNS3_MBX_UNMAP_RING_TO_VECTOR; - bind_msg.vector_id = (uint8_t)vector_id; + req.vector_id = (uint8_t)vector_id; + req.ring_num = 1; if (queue_type == HNS3_RING_TYPE_RX) - bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX; + req.ring_param[0].int_gl_index = HNS3_RING_GL_RX; else - bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX; - - bind_msg.param[0].ring_type = queue_type; - bind_msg.ring_num = 1; - bind_msg.param[0].tqp_index = queue_id; + req.ring_param[0].int_gl_index = HNS3_RING_GL_TX; + req.ring_param[0].ring_type = queue_type; + req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg, - sizeof(bind_msg), false, NULL, 0); + ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, + HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); if (ret) - hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.", - op_str, queue_id, bind_msg.vector_id, ret); + hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", + op_str, queue_id, req.vector_id, ret); return ret; } @@ -950,19 +949,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { -#define HNS3VF_VLAN_MBX_MSG_LEN 5 + struct hns3_mbx_vlan_filter vlan_filter = {0}; struct hns3_hw *hw = &hns->hw; - uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN]; - uint16_t proto = htons(RTE_ETHER_TYPE_VLAN); - uint8_t is_kill = on ? 0 : 1; - msg_data[0] = is_kill; - memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); - memcpy(&msg_data[3], &proto, sizeof(proto)); + vlan_filter.is_kill = on ? 
0 : 1; + vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL, - 0); + (uint8_t *)&vlan_filter, sizeof(vlan_filter), + true, NULL, 0); } static int diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 7af56ff23deb..144c9f4c3b0f 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -11,8 +11,6 @@ #include "hns3_intr.h" #include "hns3_rxtx.h" -#define HNS3_CMD_CODE_OFFSET 2 - static const struct errno_respcode_map err_code_map[] = { {0, 0}, {1, -EPERM}, @@ -128,29 +126,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct hns3_mbx_vf_to_pf_cmd *req; struct hns3_cmd_desc desc; bool is_ring_vector_msg; - int offset; int ret; req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; /* first two bytes are reserved for code & subcode */ - if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) { + if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { hns3_err(hw, "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET); + msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); return -EINVAL; } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = code; + req->msg.code = code; is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || (code == HNS3_MBX_GET_RING_VECTOR_MAP); if (!is_ring_vector_msg) - req->msg[1] = subcode; + req->msg.subcode = subcode; if (msg_data) { - offset = is_ring_vector_msg ? 
1 : HNS3_CMD_CODE_OFFSET; - memcpy(&req->msg[offset], msg_data, msg_len); + if (is_ring_vector_msg) + memcpy(&req->msg.vector_id, msg_data, msg_len); + else + memcpy(&req->msg.data, msg_data, msg_len); } /* synchronous send */ @@ -297,11 +296,8 @@ static void hns3pf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_vf_to_pf_cmd *req) { -#define LINK_STATUS_OFFSET 1 -#define LINK_FAIL_CODE_OFFSET 2 - - if (!req->msg[LINK_STATUS_OFFSET]) - hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]); + if (!req->msg.link_status) + hns3_link_fail_parse(hw, req->msg.link_fail_code); hns3_update_linkstatus_and_event(hw, true); } diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 4a328802b920..3f623ba64ca4 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode { HNS3_MBX_LF_XSFP_ABSENT, }; -#define HNS3_MBX_MAX_MSG_SIZE 16 #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 @@ -107,6 +106,48 @@ struct hns3_mbx_resp_status { uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; }; +struct hns3_ring_chain_param { + uint8_t ring_type; + uint8_t tqp_index; + uint8_t int_gl_index; +}; + +#pragma pack(1) +struct hns3_mbx_vlan_filter { + uint8_t is_kill; + uint16_t vlan_id; + uint16_t proto; +}; +#pragma pack() + +#define HNS3_MBX_MSG_MAX_DATA_SIZE 14 +#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 +struct hns3_vf_to_pf_msg { + uint8_t code; + union { + struct { + uint8_t subcode; + uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; + struct { + uint8_t en_bc; + uint8_t en_uc; + uint8_t en_mc; + uint8_t en_limit_promisc; + }; + struct { + uint8_t vector_id; + uint8_t ring_num; + struct hns3_ring_chain_param + ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; + }; + struct { + uint8_t link_status; + uint8_t link_fail_code; + }; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -122,7 +163,7 @@ struct hns3_mbx_vf_to_pf_cmd { uint8_t 
msg_len; uint8_t rsv2; uint16_t match_id; - uint8_t msg[HNS3_MBX_MAX_MSG_SIZE]; + struct hns3_vf_to_pf_msg msg; }; struct hns3_mbx_pf_to_vf_cmd { @@ -134,19 +175,6 @@ struct hns3_mbx_pf_to_vf_cmd { uint16_t msg[8]; }; -struct hns3_ring_chain_param { - uint8_t ring_type; - uint8_t tqp_index; - uint8_t int_gl_index; -}; - -#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 -struct hns3_vf_bind_vector_msg { - uint8_t vector_id; - uint8_t ring_num; - struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; -}; - struct hns3_pf_rst_done_cmd { uint8_t pf_rst_done; uint8_t rsv[23]; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 4/6] net/hns3: refactor PF mailbox message struct 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai ` (2 preceding siblings ...) 2023-11-10 6:13 ` [PATCH v2 3/6] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 6:13 ` [PATCH v2 5/6] net/hns3: refactor send mailbox function Jie Hai ` (2 subsequent siblings) 6 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang, Chunsong Feng, Hao Chen, Min Hu (Connor), Huisong Li, Wei Hu (Xavier) Cc: fengchengwen, liudongdong3 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the PF to VF mailbox message command is used to communicate with the VF driver, and it currently exists as a raw array. As a result, some complicated feature commands, like the mailbox response, link change event, disabling promiscuous mode, reset request and PVID state update, have to fill it with magic numbers. This isn't good for driver maintainability. So this patch refactors these messages by extracting an hns3_pf_to_vf_msg structure. 
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++------------------- drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 144c9f4c3b0f..2c1f90b1ae01 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -193,17 +193,17 @@ static void hns3vf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { + struct hns3_mbx_link_status *link_info = + (struct hns3_mbx_link_status *)req->msg.msg_data; uint8_t link_status, link_duplex; - uint16_t *msg_q = req->msg; uint8_t support_push_lsc; uint32_t link_speed; - memcpy(&link_speed, &msg_q[2], sizeof(link_speed)); - link_status = rte_le_to_cpu_16(msg_q[1]); - link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]); - hns3vf_update_link_status(hw, link_status, link_speed, - link_duplex); - support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u; + link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status); + link_speed = rte_le_to_cpu_32(link_info->speed); + link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex); + hns3vf_update_link_status(hw, link_status, link_speed, link_duplex); + support_push_lsc = (link_info->flag) & 1u; hns3vf_update_push_lsc_cap(hw, support_push_lsc); } @@ -212,7 +212,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { enum hns3_reset_level reset_level; - uint16_t *msg_q = req->msg; /* * PF has asserted reset hence VF should go in pending @@ -220,7 +219,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, * has been completely reset. After this stack should * eventually be re-initialized. 
*/ - reset_level = rte_le_to_cpu_16(msg_q[1]); + reset_level = rte_le_to_cpu_16(req->msg.reset_level); hns3_atomic_set_bit(reset_level, &hw->reset.pending); hns3_warn(hw, "PF inform reset level %d", reset_level); @@ -242,8 +241,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * to match the request. */ if (req->match_id == resp->match_id) { - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = + hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -256,7 +256,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. */ - msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + msg_data = (uint32_t)req->msg.vf_mbx_msg_code << + HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode; if (resp->req_msg_data != msg_data) { hns3_warn(hw, "received response tag (%u) is mismatched with requested tag (%u)", @@ -264,8 +265,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) return; } - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -306,8 +307,7 @@ static void hns3_update_port_base_vlan_info(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { -#define PVID_STATE_OFFSET 1 - uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ? + uint16_t new_pvid_state = req->msg.pvid_state ? 
HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE; /* * Currently, hardware doesn't support more than two layers VLAN offload @@ -356,7 +356,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) while (next_to_use != tail) { desc = &crq->desc[next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag); if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B)) @@ -430,7 +430,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) desc = &crq->desc[crq->next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { @@ -486,7 +486,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) * hns3 PF kernel driver, VF driver will receive this * mailbox message from PF driver. */ - hns3_handle_promisc_info(hw, req->msg[1]); + hns3_handle_promisc_info(hw, req->msg.promisc_en); break; default: hns3_err(hw, "received unsupported(%u) mbx msg", diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 3f623ba64ca4..64f30d2923ea 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter { uint16_t vlan_id; uint16_t proto; }; + +struct hns3_mbx_link_status { + uint16_t link_status; + uint32_t speed; + uint16_t duplex; + uint8_t flag; +}; #pragma pack() #define HNS3_MBX_MSG_MAX_DATA_SIZE 14 @@ -148,6 +155,22 @@ struct hns3_vf_to_pf_msg { }; }; +struct hns3_pf_to_vf_msg { + uint16_t code; + union { + struct { + uint16_t vf_mbx_msg_code; + uint16_t vf_mbx_msg_subcode; + uint16_t resp_status; + uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE]; + }; + uint16_t promisc_en; + uint16_t reset_level; + uint16_t pvid_state; + uint8_t msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; +}; + struct errno_respcode_map { uint16_t resp_code; 
int err_no; @@ -172,7 +195,7 @@ struct hns3_mbx_pf_to_vf_cmd { uint8_t msg_len; uint8_t rsv1; uint16_t match_id; - uint16_t msg[8]; + struct hns3_pf_to_vf_msg msg; }; struct hns3_pf_rst_done_cmd { -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 5/6] net/hns3: refactor send mailbox function 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai ` (3 preceding siblings ...) 2023-11-10 6:13 ` [PATCH v2 4/6] net/hns3: refactor PF " Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 16:23 ` Ferruh Yigit 2023-11-10 6:13 ` [PATCH v2 6/6] net/hns3: refactor handle " Jie Hai 2023-11-10 16:12 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Ferruh Yigit 6 siblings, 1 reply; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang, Ferruh Yigit, Wei Hu (Xavier), Chunsong Feng, Min Hu (Connor), Huisong Li Cc: fengchengwen, liudongdong3 From: Dengdui Huang <huangdengdui@huawei.com> The 'hns3_send_mbx_msg' function has the following problems: 1. the name is vague and doesn't indicate the caller. 2. it takes too many input parameters, because filling in the message is mixed into the send path. Therefore, a common interface is encapsulated to fill in the mailbox message before sending it. 
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 140 ++++++++++++++++++------------ drivers/net/hns3/hns3_mbx.c | 50 ++++------- drivers/net/hns3/hns3_mbx.h | 8 +- drivers/net/hns3/hns3_rxtx.c | 18 ++-- 4 files changed, 115 insertions(+), 101 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 3cc780985716..f4c4c67e977e 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes, - RTE_ETHER_ADDR_LEN, false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, struct 
rte_ether_addr *old_addr; uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; /* @@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes, RTE_ETHER_ADDR_LEN); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes, - HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_MODIFY); + memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) { /* * The hns3 VF PMD depends on the hns3 PF kernel ethdev @@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_ADD, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, 
uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { -#define HNS3_RING_VECTOR_DATA_SIZE 14 struct hns3_vf_to_pf_msg req = {0}; const char *op_str; int ret; @@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, req.ring_param[0].ring_type = queue_type; req.ring_param[0].tqp_index = queue_id; op_str = mmap ? "Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, - HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", op_str, queue_id, req.vector_id, ret); @@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) static int hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu) { + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu, - sizeof(mtu), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0); + memcpy(req.data, &mtu, sizeof(mtu)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret); @@ -635,8 +642,8 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) rte_atomic_store_explicit(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, rte_memory_order_release); - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); while (remain_ms > 0) { rte_delay_ms(HNS3_POLL_RESPONE_MS); @@ -731,12 +738,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw) static int hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw) { + struct hns3_vf_to_pf_msg req; uint8_t resp_msg; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0, - true, &resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_GET_PORT_BASE_VLAN_STATE); + ret = 
hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg)); if (ret) { if (ret == -ETIME) { /* @@ -777,10 +785,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw) { #define HNS3VF_TQPS_RSS_INFO_LEN 6 uint8_t resp_msg[HNS3VF_TQPS_RSS_INFO_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true, - resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, + resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); if (ret) { PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret); return ret; @@ -818,10 +828,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw) { uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE]; struct hns3_basic_info *basic_info; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0, - true, resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg)); if (ret) { hns3_err(hw, "failed to get basic info from PF, ret = %d.", ret); @@ -841,10 +852,11 @@ static int hns3vf_get_host_mac_addr(struct hns3_hw *hw) { uint8_t host_mac[RTE_ETHER_ADDR_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0, - true, host_mac, RTE_ETHER_ADDR_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0); + ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN); if (ret) { hns3_err(hw, "Failed to get mac addr from PF: %d", ret); return ret; @@ -893,6 +905,7 @@ static void hns3vf_request_link_info(struct hns3_hw *hw) { struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; bool send_req; int ret; @@ -904,8 +917,8 @@ hns3vf_request_link_info(struct hns3_hw *hw) if (!send_req) return; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + ret = hns3vf_mbx_send(hw, 
&req, false, NULL, 0); if (ret) { hns3_err(hw, "failed to fetch link status, ret = %d", ret); return; @@ -949,16 +962,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { - struct hns3_mbx_vlan_filter vlan_filter = {0}; + struct hns3_mbx_vlan_filter *vlan_filter; + struct hns3_vf_to_pf_msg req = {0}; struct hns3_hw *hw = &hns->hw; - vlan_filter.is_kill = on ? 0 : 1; - vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); - vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); + req.code = HNS3_MBX_SET_VLAN; + req.subcode = HNS3_MBX_VLAN_FILTER; + vlan_filter = (struct hns3_mbx_vlan_filter *)req.data; + vlan_filter->is_kill = on ? 0 : 1; + vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id); - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - (uint8_t *)&vlan_filter, sizeof(vlan_filter), - true, NULL, 0); + return hns3vf_mbx_send(hw, &req, true, NULL, 0); } static int @@ -987,6 +1002,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) static int hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; @@ -994,9 +1010,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) return 0; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_ENABLE_VLAN_FILTER); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "%s vlan filter failed, ret = %d.", enable ? 
"enable" : "disable", ret); @@ -1007,12 +1024,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) static int hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG, - &msg_data, sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_VLAN_RX_OFF_CFG); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "vf %s strip failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1156,11 +1176,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev) static int hns3vf_set_alive(struct hns3_hw *hw, bool alive) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; msg_data = alive ? 1 : 0; - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data, - sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0); + memcpy(req.data, &msg_data, sizeof(msg_data)); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static void @@ -1168,11 +1190,12 @@ hns3vf_keep_alive_handler(void *param) { struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; struct hns3_adapter *hns = eth_dev->data->dev_private; + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "VF sends keeping alive cmd failed(=%d)", ret); @@ -1311,9 +1334,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns) static int hns3vf_clear_vport_list(struct hns3_hw *hw) { - return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL, - HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false, - NULL, 0); + struct hns3_vf_to_pf_msg req; + + hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL, + HNS3_MBX_VPORT_LIST_CLEAR); + return 
hns3vf_mbx_send(hw, &req, false, NULL, 0); } static int @@ -1782,12 +1807,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns) static int hns3vf_prepare_reset(struct hns3_adapter *hns) { + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; if (hw->reset.level == HNS3_VF_FUNC_RESET) { - ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL, - 0, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) return ret; } diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 2c1f90b1ae01..278302135804 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = { {95, -EOPNOTSUPP}, }; +void +hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode) +{ + memset(req, 0, sizeof(struct hns3_vf_to_pf_msg)); + req->code = code; + req->subcode = subcode; +} + static int hns3_resp_to_errno(uint16_t resp_code) { @@ -119,45 +127,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) } int -hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len) +hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req, bool need_resp, + uint8_t *resp_data, uint16_t resp_len) { - struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_mbx_vf_to_pf_cmd *cmd; struct hns3_cmd_desc desc; - bool is_ring_vector_msg; int ret; - req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; - - /* first two bytes are reserved for code & subcode */ - if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { - hns3_err(hw, - "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); - return -EINVAL; - } - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg.code = code; - is_ring_vector_msg = (code == 
HNS3_MBX_MAP_RING_TO_VECTOR) || - (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || - (code == HNS3_MBX_GET_RING_VECTOR_MAP); - if (!is_ring_vector_msg) - req->msg.subcode = subcode; - if (msg_data) { - if (is_ring_vector_msg) - memcpy(&req->msg.vector_id, msg_data, msg_len); - else - memcpy(&req->msg.data, msg_data, msg_len); - } + cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; + cmd->msg = *req; /* synchronous send */ if (need_resp) { - req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; + cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; rte_spinlock_lock(&hw->mbx_resp.lock); - hns3_mbx_prepare_resp(hw, code, subcode); - req->match_id = hw->mbx_resp.match_id; + hns3_mbx_prepare_resp(hw, req->code, req->subcode); + cmd->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { rte_spinlock_unlock(&hw->mbx_resp.lock); @@ -166,7 +153,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return ret; } - ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len); + ret = hns3_get_mbx_resp(hw, req->code, req->subcode, + resp_data, resp_len); rte_spinlock_unlock(&hw->mbx_resp.lock); } else { /* asynchronous send */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 64f30d2923ea..360e91c30eb9 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -210,7 +210,9 @@ struct hns3_pf_rst_done_cmd { struct hns3_hw; void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); -int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len); +void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, + uint8_t code, uint8_t subcode); +int hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req_msg, bool need_resp, + uint8_t *resp_data, uint16_t resp_len); #endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index bb600475e91e..32cb314cd9e5 100644 
--- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) static int hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) { - uint8_t msg_data[2]; + struct hns3_vf_to_pf_msg req; int ret; - memcpy(msg_data, &queue_id, sizeof(uint16_t)); - - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + memcpy(req.data, &queue_id, sizeof(uint16_t)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.", queue_id, ret); @@ -769,15 +768,14 @@ static int hns3vf_reset_all_tqps(struct hns3_hw *hw) { #define HNS3VF_RESET_ALL_TQP_DONE 1U + struct hns3_vf_to_pf_msg req; uint8_t reset_status; - uint8_t msg_data[2]; int ret; uint16_t i; - memset(msg_data, 0, sizeof(msg_data)); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, &reset_status, - sizeof(reset_status)); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, + &reset_status, sizeof(reset_status)); if (ret) { hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret); return ret; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v2 5/6] net/hns3: refactor send mailbox function 2023-11-10 6:13 ` [PATCH v2 5/6] net/hns3: refactor send mailbox function Jie Hai @ 2023-11-10 16:23 ` Ferruh Yigit 0 siblings, 0 replies; 33+ messages in thread From: Ferruh Yigit @ 2023-11-10 16:23 UTC (permalink / raw) To: Jie Hai, dev, Yisen Zhuang, Ferruh Yigit, Wei Hu (Xavier), Chunsong Feng, Min Hu (Connor), Huisong Li Cc: fengchengwen, liudongdong3 On 11/10/2023 6:13 AM, Jie Hai wrote: > From: Dengdui Huang <huangdengdui@huawei.com> > > The 'hns3_send_mbx_msg' function has following problem: > 1. the name is vague, missing caller indication. > 2. too many input parameters because the filling messages > are placed in commands the send command. > > Therefore, a common interface is encapsulated to fill in > the mailbox message before sending it. > > Fixes: 463e748964f5 ("net/hns3: support mailbox") > Cc: stable@dpdk.org > > Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> > Signed-off-by: Jie Hai <haijie1@huawei.com> <...> > @@ -635,8 +642,8 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) > rte_atomic_store_explicit(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, > rte_memory_order_release); > > - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, > - NULL, 0); > + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); > + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); > 'req' is not declared in this function scope but used. ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 6/6] net/hns3: refactor handle mailbox function 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai ` (4 preceding siblings ...) 2023-11-10 6:13 ` [PATCH v2 5/6] net/hns3: refactor send mailbox function Jie Hai @ 2023-11-10 6:13 ` Jie Hai 2023-11-10 16:12 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Ferruh Yigit 6 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-11-10 6:13 UTC (permalink / raw) To: dev, ferruh.yigit, Yisen Zhuang, Huisong Li, Chunsong Feng, Min Hu (Connor), Hao Chen, Ferruh Yigit, Wei Hu (Xavier), Hongbo Zheng Cc: fengchengwen, liudongdong3 From: Dengdui Huang <huangdengdui@huawei.com> The PF and VF currently process their mailbox messages in one shared function. This is excessive coupling and isn't good for maintenance. Therefore, this patch separates the interfaces that handle PF mailbox messages and VF mailbox messages. Fixes: 463e748964f5 ("net/hns3: support mailbox") Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_mbx.c | 70 ++++++++++++++++++++++++------- drivers/net/hns3/hns3_mbx.h | 3 +- 4 files changed, 59 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 4b63308e8fdf..ffeb18516be9 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -382,7 +382,7 @@ hns3_interrupt_handler(void *param) hns3_warn(hw, "received reset interrupt"); hns3_schedule_reset(hns); } else if (event_cause == HNS3_VECTOR0_EVENT_MBX) { - hns3_dev_handle_mbx_msg(hw); + hns3pf_handle_mbx_msg(hw); } else if (event_cause != HNS3_VECTOR0_EVENT_PTP) { hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x 
" "ras_int_stat:0x%x cmdq_int_stat:0x%x", diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index f4c4c67e977e..dd197aedb7ea 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -605,7 +605,7 @@ hns3vf_interrupt_handler(void *param) hns3_schedule_reset(hns); break; case HNS3VF_VECTOR0_EVENT_MBX: - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); break; default: break; @@ -654,7 +654,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE * mailbox from PF driver to get this capability. */ - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); if (rte_atomic_load_explicit(&vf->pf_push_lsc_cap, rte_memory_order_acquire) != HNS3_PF_PUSH_LSC_CAP_UNKNOWN) break; diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 278302135804..c897bd39bed5 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -79,7 +79,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return -EIO; } - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); if (hw->mbx_resp.received_match_resp) @@ -373,9 +373,58 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) } void -hns3_dev_handle_mbx_msg(struct hns3_hw *hw) +hns3pf_handle_mbx_msg(struct hns3_hw *hw) +{ + struct hns3_cmq_ring *crq = &hw->cmq.crq; + struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_cmd_desc *desc; + uint16_t flag; + + rte_spinlock_lock(&hw->cmq.crq.lock); + + while (!hns3_cmd_crq_empty(hw)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { + rte_spinlock_unlock(&hw->cmq.crq.lock); + return; + } + desc = &crq->desc[crq->next_to_use]; + req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data; + + flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); + if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { + hns3_warn(hw, + "dropped invalid mailbox 
message, code = %u", + req->msg.code); + + /* dropping/not processing this invalid message */ + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + continue; + } + + switch (req->msg.code) { + case HNS3_MBX_PUSH_LINK_STATUS: + hns3pf_handle_link_change_event(hw, req); + break; + default: + hns3_err(hw, "received unsupported(%u) mbx msg", + req->msg.code); + break; + } + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + } + + /* Write back CMDQ_RQ header pointer, IMP need this pointer */ + hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use); + + rte_spinlock_unlock(&hw->cmq.crq.lock); +} + +void +hns3vf_handle_mbx_msg(struct hns3_hw *hw) { - struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_cmq_ring *crq = &hw->cmq.crq; struct hns3_mbx_pf_to_vf_cmd *req; struct hns3_cmd_desc *desc; @@ -386,7 +435,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) rte_spinlock_lock(&hw->cmq.crq.lock); handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY || - !rte_thread_is_intr()) && hns->is_vf; + !rte_thread_is_intr()); if (handle_out) { /* * Currently, any threads in the primary and secondary processes @@ -432,8 +481,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) continue; } - handle_out = hns->is_vf && desc->opcode == 0; - if (handle_out) { + if (desc->opcode == 0) { /* Message already processed by other thread */ crq->desc[crq->next_to_use].flag = 0; hns3_mbx_ring_ptr_move_crq(crq); @@ -450,16 +498,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) case HNS3_MBX_ASSERTING_RESET: hns3_handle_asserting_reset(hw, req); break; - case HNS3_MBX_PUSH_LINK_STATUS: - /* - * This message is reported by the firmware and is - * reported in 'struct hns3_mbx_vf_to_pf_cmd' format. - * Therefore, we should cast the req variable to - * 'struct hns3_mbx_vf_to_pf_cmd' and then process it. 
- */ - hns3pf_handle_link_change_event(hw, - (struct hns3_mbx_vf_to_pf_cmd *)req); - break; case HNS3_MBX_PUSH_VLAN_INFO: /* * When the PVID configuration status of VF device is diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 360e91c30eb9..967d9df3bcac 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -209,7 +209,8 @@ struct hns3_pf_rst_done_cmd { ((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num) struct hns3_hw; -void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); +void hns3pf_handle_mbx_msg(struct hns3_hw *hw); +void hns3vf_handle_mbx_msg(struct hns3_hw *hw); void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode); int hns3vf_mbx_send(struct hns3_hw *hw, -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v2 0/6] net/hns3: fix and refactor some codes 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai ` (5 preceding siblings ...) 2023-11-10 6:13 ` [PATCH v2 6/6] net/hns3: refactor handle " Jie Hai @ 2023-11-10 16:12 ` Ferruh Yigit 6 siblings, 0 replies; 33+ messages in thread From: Ferruh Yigit @ 2023-11-10 16:12 UTC (permalink / raw) To: Jie Hai, dev, techboard; +Cc: lihuisong, fengchengwen, liudongdong3 On 11/10/2023 6:13 AM, Jie Hai wrote: > This patchset fixes the failure on sync mailbox and > refactors some codes on mailbox, also replace gcc > builtin __atomic_xxx with rte_atomic_xxx. > > -- > v2: > 1. fix misspelling error in commit log and codes. > 2. replace __atomic_xxx with rte_atomic_xxx. > > -- > > Dengdui Huang (5): > net/hns3: fix sync mailbox failure forever > net/hns3: refactor VF mailbox message struct > net/hns3: refactor PF mailbox message struct > net/hns3: refactor send mailbox function > net/hns3: refactor handle mailbox function > > Jie Hai (1): > net/hns3: use stdatomic API > Hi Jie, I still believe this set is a big update for -rc3. I proceeded based on your request, with the ask to test it thoroughly, but the new version doesn't even compile, which doesn't give confidence in the set. As most of the code is refactoring, let's postpone it to the next release. Please send the fix (the first patch) as a separate patch and only merge that one for this release. ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v3 0/4] net/hns3: refactor mailbox 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (6 preceding siblings ...) 2023-11-10 6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai @ 2023-12-07 1:37 ` Jie Hai 2023-12-07 1:37 ` [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Jie Hai ` (4 more replies) 2023-12-08 6:55 ` [PATCH v4 " Jie Hai 8 siblings, 5 replies; 33+ messages in thread From: Jie Hai @ 2023-12-07 1:37 UTC (permalink / raw) To: dev; +Cc: lihuisong, fengchengwen, liudongdong3, haijie1 This patchset refactors the mailbox code. -- v3: 1. fix the checkpatch warning on __atomic_xxx. -- Dengdui Huang (4): net/hns3: refactor VF mailbox message struct net/hns3: refactor PF mailbox message struct net/hns3: refactor send mailbox function net/hns3: refactor handle mailbox function drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev.h | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 181 +++++++++++++++++------------- drivers/net/hns3/hns3_mbx.c | 166 +++++++++++++++------------ drivers/net/hns3/hns3_mbx.h | 94 ++++++++++++---- drivers/net/hns3/hns3_rxtx.c | 18 ++- 6 files changed, 280 insertions(+), 183 deletions(-) -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai @ 2023-12-07 1:37 ` Jie Hai 2023-12-07 12:47 ` Ferruh Yigit 2023-12-07 1:37 ` [PATCH v3 2/4] net/hns3: refactor PF " Jie Hai ` (3 subsequent siblings) 4 siblings, 1 reply; 33+ messages in thread From: Jie Hai @ 2023-12-07 1:37 UTC (permalink / raw) To: dev, Yisen Zhuang, Min Hu (Connor), Huisong Li, Wei Hu (Xavier), Hao Chen, Ferruh Yigit Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the VF to PF mbx message command is used to communicate with the PF driver, and this data region exists as a raw array. As a result, some complicated feature commands, like setting promisc mode, map/unmap ring vector and setting VLAN id, have to use magic numbers to fill it. This isn't good for the maintainability of the driver. So this patch refactors these messages by extracting an hns3_vf_to_pf_msg structure. In addition, the PF link change event message is reported by the firmware in hns3_mbx_vf_to_pf_cmd format, so it also needs to be modified. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 54 +++++++++++++--------------- drivers/net/hns3/hns3_mbx.c | 24 ++++++------- drivers/net/hns3/hns3_mbx.h | 58 +++++++++++++++++++++++-------- 3 files changed, 78 insertions(+), 58 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 916cc0fb1b62..4cddf01d6f20 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * the packets with vlan tag in promiscuous mode. */ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; - req->msg[1] = en_bc_pmc ? 
1 : 0; - req->msg[2] = en_uc_pmc ? 1 : 0; - req->msg[3] = en_mc_pmc ? 1 : 0; - req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; + req->msg.code = HNS3_MBX_SET_PROMISC_MODE; + req->msg.en_bc = en_bc_pmc ? 1 : 0; + req->msg.en_uc = en_uc_pmc ? 1 : 0; + req->msg.en_mc = en_mc_pmc ? 1 : 0; + req->msg.en_limit_promisc = + hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; ret = hns3_cmd_send(hw, &desc, 1); if (ret) @@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { - struct hns3_vf_bind_vector_msg bind_msg; +#define HNS3_RING_VECTOR_DATA_SIZE 14 + struct hns3_vf_to_pf_msg req = {0}; const char *op_str; - uint16_t code; int ret; - memset(&bind_msg, 0, sizeof(bind_msg)); - code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : + req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : HNS3_MBX_UNMAP_RING_TO_VECTOR; - bind_msg.vector_id = (uint8_t)vector_id; + req.vector_id = (uint8_t)vector_id; + req.ring_num = 1; if (queue_type == HNS3_RING_TYPE_RX) - bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX; + req.ring_param[0].int_gl_index = HNS3_RING_GL_RX; else - bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX; - - bind_msg.param[0].ring_type = queue_type; - bind_msg.ring_num = 1; - bind_msg.param[0].tqp_index = queue_id; + req.ring_param[0].int_gl_index = HNS3_RING_GL_TX; + req.ring_param[0].ring_type = queue_type; + req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg, - sizeof(bind_msg), false, NULL, 0); + ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, + HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); if (ret) - hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.", - op_str, queue_id, bind_msg.vector_id, ret); + hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", + op_str, queue_id, req.vector_id, ret); return ret; } @@ -965,19 +964,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { -#define HNS3VF_VLAN_MBX_MSG_LEN 5 + struct hns3_mbx_vlan_filter vlan_filter = {0}; struct hns3_hw *hw = &hns->hw; - uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN]; - uint16_t proto = htons(RTE_ETHER_TYPE_VLAN); - uint8_t is_kill = on ? 0 : 1; - msg_data[0] = is_kill; - memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); - memcpy(&msg_data[3], &proto, sizeof(proto)); + vlan_filter.is_kill = on ? 
0 : 1; + vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL, - 0); + (uint8_t *)&vlan_filter, sizeof(vlan_filter), + true, NULL, 0); } static int diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index f1743c195efa..ad5ec555b39e 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -11,8 +11,6 @@ #include "hns3_intr.h" #include "hns3_rxtx.h" -#define HNS3_CMD_CODE_OFFSET 2 - static const struct errno_respcode_map err_code_map[] = { {0, 0}, {1, -EPERM}, @@ -127,29 +125,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct hns3_mbx_vf_to_pf_cmd *req; struct hns3_cmd_desc desc; bool is_ring_vector_msg; - int offset; int ret; req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; /* first two bytes are reserved for code & subcode */ - if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) { + if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { hns3_err(hw, "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET); + msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); return -EINVAL; } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = code; + req->msg.code = code; is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || (code == HNS3_MBX_GET_RING_VECTOR_MAP); if (!is_ring_vector_msg) - req->msg[1] = subcode; + req->msg.subcode = subcode; if (msg_data) { - offset = is_ring_vector_msg ? 
1 : HNS3_CMD_CODE_OFFSET; - memcpy(&req->msg[offset], msg_data, msg_len); + if (is_ring_vector_msg) + memcpy(&req->msg.vector_id, msg_data, msg_len); + else + memcpy(&req->msg.data, msg_data, msg_len); } /* synchronous send */ @@ -296,11 +295,8 @@ static void hns3pf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_vf_to_pf_cmd *req) { -#define LINK_STATUS_OFFSET 1 -#define LINK_FAIL_CODE_OFFSET 2 - - if (!req->msg[LINK_STATUS_OFFSET]) - hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]); + if (!req->msg.link_status) + hns3_link_fail_parse(hw, req->msg.link_fail_code); hns3_update_linkstatus_and_event(hw, true); } diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 4a328802b920..3f623ba64ca4 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode { HNS3_MBX_LF_XSFP_ABSENT, }; -#define HNS3_MBX_MAX_MSG_SIZE 16 #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 @@ -107,6 +106,48 @@ struct hns3_mbx_resp_status { uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; }; +struct hns3_ring_chain_param { + uint8_t ring_type; + uint8_t tqp_index; + uint8_t int_gl_index; +}; + +#pragma pack(1) +struct hns3_mbx_vlan_filter { + uint8_t is_kill; + uint16_t vlan_id; + uint16_t proto; +}; +#pragma pack() + +#define HNS3_MBX_MSG_MAX_DATA_SIZE 14 +#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 +struct hns3_vf_to_pf_msg { + uint8_t code; + union { + struct { + uint8_t subcode; + uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; + struct { + uint8_t en_bc; + uint8_t en_uc; + uint8_t en_mc; + uint8_t en_limit_promisc; + }; + struct { + uint8_t vector_id; + uint8_t ring_num; + struct hns3_ring_chain_param + ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; + }; + struct { + uint8_t link_status; + uint8_t link_fail_code; + }; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -122,7 +163,7 @@ struct hns3_mbx_vf_to_pf_cmd { uint8_t 
msg_len; uint8_t rsv2; uint16_t match_id; - uint8_t msg[HNS3_MBX_MAX_MSG_SIZE]; + struct hns3_vf_to_pf_msg msg; }; struct hns3_mbx_pf_to_vf_cmd { @@ -134,19 +175,6 @@ struct hns3_mbx_pf_to_vf_cmd { uint16_t msg[8]; }; -struct hns3_ring_chain_param { - uint8_t ring_type; - uint8_t tqp_index; - uint8_t int_gl_index; -}; - -#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 -struct hns3_vf_bind_vector_msg { - uint8_t vector_id; - uint8_t ring_num; - struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; -}; - struct hns3_pf_rst_done_cmd { uint8_t pf_rst_done; uint8_t rsv[23]; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct 2023-12-07 1:37 ` [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-12-07 12:47 ` Ferruh Yigit 0 siblings, 0 replies; 33+ messages in thread From: Ferruh Yigit @ 2023-12-07 12:47 UTC (permalink / raw) To: Jie Hai, dev, Yisen Zhuang, Min Hu (Connor), Huisong Li, Wei Hu (Xavier), Hao Chen, Tyler Retzlaff Cc: fengchengwen, liudongdong3, David Marchand, Thomas Monjalon On 12/7/2023 1:37 AM, Jie Hai wrote: > From: Dengdui Huang <huangdengdui@huawei.com> > > The data region in VF to PF mbx message command is > used to communicate with PF driver. And this data > region exists as an array. As a result, some complicated > feature commands, like setting promisc mode, map/unmap > ring vector and setting VLAN id, have to use magic number > to set them. This isn't good for maintenance of driver. > So this patch refactors these messages by extracting an > hns3_vf_to_pf_msg structure. > > In addition, the PF link change event message is reported > by the firmware and is reported in hns3_mbx_vf_to_pf_cmd > format, it also needs to be modified. > > Fixes: 463e748964f5 ("net/hns3: support mailbox") > Cc: stable@dpdk.org > > Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> > Signed-off-by: Jie Hai <haijie1@huawei.com> <...> > @@ -107,6 +106,48 @@ struct hns3_mbx_resp_status { > uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; > }; > > +struct hns3_ring_chain_param { > + uint8_t ring_type; > + uint8_t tqp_index; > + uint8_t int_gl_index; > +}; > + > +#pragma pack(1) > +struct hns3_mbx_vlan_filter { > + uint8_t is_kill; > + uint16_t vlan_id; > + uint16_t proto; > +}; > +#pragma pack() > + > Please prefer '__rte_packed' instead of "#pragma pack()", as it is a more consistent way to achieve the same purpose. But I see multiple instances of "#pragma pack()" already exist. @Tyler, as far as I understand, '__attribute__((__packed__))' (__rte_packed) is the GCC way and "#pragma pack()" is the Windows way. 
Is __rte_packed causing problem with latest windows compilers? And should we have an abstract __rte_packed that works for both windows compiler and gcc? ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v3 2/4] net/hns3: refactor PF mailbox message struct 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai 2023-12-07 1:37 ` [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-12-07 1:37 ` Jie Hai 2023-12-07 1:37 ` [PATCH v3 3/4] net/hns3: refactor send mailbox function Jie Hai ` (2 subsequent siblings) 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-07 1:37 UTC (permalink / raw) To: dev, Yisen Zhuang, Chunsong Feng, Huisong Li, Hao Chen, Min Hu (Connor), Ferruh Yigit Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the PF to VF mbx message command is used to communicate with the VF driver, and this data region exists as a raw array. As a result, some complicated feature commands, like mailbox response, link change event, close promisc mode, reset request and update pvid state, have to use magic numbers to fill it. This isn't good for the maintainability of the driver. So this patch refactors these messages by extracting an hns3_pf_to_vf_msg structure. 
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++------------------- drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index ad5ec555b39e..c90f5d59ba21 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -192,17 +192,17 @@ static void hns3vf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { + struct hns3_mbx_link_status *link_info = + (struct hns3_mbx_link_status *)req->msg.msg_data; uint8_t link_status, link_duplex; - uint16_t *msg_q = req->msg; uint8_t support_push_lsc; uint32_t link_speed; - memcpy(&link_speed, &msg_q[2], sizeof(link_speed)); - link_status = rte_le_to_cpu_16(msg_q[1]); - link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]); - hns3vf_update_link_status(hw, link_status, link_speed, - link_duplex); - support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u; + link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status); + link_speed = rte_le_to_cpu_32(link_info->speed); + link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex); + hns3vf_update_link_status(hw, link_status, link_speed, link_duplex); + support_push_lsc = (link_info->flag) & 1u; hns3vf_update_push_lsc_cap(hw, support_push_lsc); } @@ -211,7 +211,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { enum hns3_reset_level reset_level; - uint16_t *msg_q = req->msg; /* * PF has asserted reset hence VF should go in pending @@ -219,7 +218,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, * has been completely reset. After this stack should * eventually be re-initialized. 
*/ - reset_level = rte_le_to_cpu_16(msg_q[1]); + reset_level = rte_le_to_cpu_16(req->msg.reset_level); hns3_atomic_set_bit(reset_level, &hw->reset.pending); hns3_warn(hw, "PF inform reset level %d", reset_level); @@ -241,8 +240,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * to match the request. */ if (req->match_id == resp->match_id) { - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = + hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -255,7 +255,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. */ - msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + msg_data = (uint32_t)req->msg.vf_mbx_msg_code << + HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode; if (resp->req_msg_data != msg_data) { hns3_warn(hw, "received response tag (%u) is mismatched with requested tag (%u)", @@ -263,8 +264,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) return; } - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -305,8 +306,7 @@ static void hns3_update_port_base_vlan_info(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { -#define PVID_STATE_OFFSET 1 - uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ? + uint16_t new_pvid_state = req->msg.pvid_state ? 
HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE; /* * Currently, hardware doesn't support more than two layers VLAN offload @@ -355,7 +355,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) while (next_to_use != tail) { desc = &crq->desc[next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag); if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B)) @@ -428,7 +428,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) desc = &crq->desc[crq->next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { @@ -484,7 +484,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) * hns3 PF kernel driver, VF driver will receive this * mailbox message from PF driver. */ - hns3_handle_promisc_info(hw, req->msg[1]); + hns3_handle_promisc_info(hw, req->msg.promisc_en); break; default: hns3_err(hw, "received unsupported(%u) mbx msg", diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 3f623ba64ca4..64f30d2923ea 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter { uint16_t vlan_id; uint16_t proto; }; + +struct hns3_mbx_link_status { + uint16_t link_status; + uint32_t speed; + uint16_t duplex; + uint8_t flag; +}; #pragma pack() #define HNS3_MBX_MSG_MAX_DATA_SIZE 14 @@ -148,6 +155,22 @@ struct hns3_vf_to_pf_msg { }; }; +struct hns3_pf_to_vf_msg { + uint16_t code; + union { + struct { + uint16_t vf_mbx_msg_code; + uint16_t vf_mbx_msg_subcode; + uint16_t resp_status; + uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE]; + }; + uint16_t promisc_en; + uint16_t reset_level; + uint16_t pvid_state; + uint8_t msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; +}; + struct errno_respcode_map { uint16_t resp_code; 
int err_no; @@ -172,7 +195,7 @@ struct hns3_mbx_pf_to_vf_cmd { uint8_t msg_len; uint8_t rsv1; uint16_t match_id; - uint16_t msg[8]; + struct hns3_pf_to_vf_msg msg; }; struct hns3_pf_rst_done_cmd { -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v3 3/4] net/hns3: refactor send mailbox function 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai 2023-12-07 1:37 ` [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Jie Hai 2023-12-07 1:37 ` [PATCH v3 2/4] net/hns3: refactor PF " Jie Hai @ 2023-12-07 1:37 ` Jie Hai 2023-12-07 1:37 ` [PATCH v3 4/4] net/hns3: refactor handle " Jie Hai 2023-12-07 12:31 ` [PATCH v3 0/4] net/hns3: refactor mailbox Ferruh Yigit 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-07 1:37 UTC (permalink / raw) To: dev, Yisen Zhuang, Min Hu (Connor), Wei Hu (Xavier), Chunsong Feng, Hao Chen, Ferruh Yigit Cc: lihuisong, fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The 'hns3_send_mbx_msg' function has the following problems: 1. its name is vague and does not indicate the caller. 2. it takes too many input parameters because the message is assembled inside the send function. Therefore, a common interface is encapsulated to fill in the mailbox message before sending it. 
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 141 ++++++++++++++++++------------ drivers/net/hns3/hns3_mbx.c | 50 ++++------- drivers/net/hns3/hns3_mbx.h | 8 +- drivers/net/hns3/hns3_rxtx.c | 18 ++-- 4 files changed, 116 insertions(+), 101 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 4cddf01d6f20..b0d0c29df191 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes, - RTE_ETHER_ADDR_LEN, false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, struct 
rte_ether_addr *old_addr; uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; /* @@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes, RTE_ETHER_ADDR_LEN); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes, - HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_MODIFY); + memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) { /* * The hns3 VF PMD depends on the hns3 PF kernel ethdev @@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_ADD, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, 
uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { -#define HNS3_RING_VECTOR_DATA_SIZE 14 struct hns3_vf_to_pf_msg req = {0}; const char *op_str; int ret; @@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, req.ring_param[0].ring_type = queue_type; req.ring_param[0].tqp_index = queue_id; op_str = mmap ? "Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, - HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", op_str, queue_id, req.vector_id, ret); @@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) static int hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu) { + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu, - sizeof(mtu), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0); + memcpy(req.data, &mtu, sizeof(mtu)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret); @@ -646,12 +653,13 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) uint16_t val = HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED; uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN; struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; __atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, __ATOMIC_RELEASE); - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); while (remain_ms > 0) { rte_delay_ms(HNS3_POLL_RESPONE_MS); @@ -746,12 +754,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw) static int hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw) { + struct hns3_vf_to_pf_msg req; uint8_t resp_msg; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - 
HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0, - true, &resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_GET_PORT_BASE_VLAN_STATE); + ret = hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg)); if (ret) { if (ret == -ETIME) { /* @@ -792,10 +801,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw) { #define HNS3VF_TQPS_RSS_INFO_LEN 6 uint8_t resp_msg[HNS3VF_TQPS_RSS_INFO_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true, - resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, + resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); if (ret) { PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret); return ret; @@ -833,10 +844,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw) { uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE]; struct hns3_basic_info *basic_info; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0, - true, resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg)); if (ret) { hns3_err(hw, "failed to get basic info from PF, ret = %d.", ret); @@ -856,10 +868,11 @@ static int hns3vf_get_host_mac_addr(struct hns3_hw *hw) { uint8_t host_mac[RTE_ETHER_ADDR_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0, - true, host_mac, RTE_ETHER_ADDR_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0); + ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN); if (ret) { hns3_err(hw, "Failed to get mac addr from PF: %d", ret); return ret; @@ -908,6 +921,7 @@ static void hns3vf_request_link_info(struct hns3_hw *hw) { struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; bool send_req; int ret; @@ -919,8 +933,8 @@ hns3vf_request_link_info(struct hns3_hw *hw) if (!send_req) 
return; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_err(hw, "failed to fetch link status, ret = %d", ret); return; @@ -964,16 +978,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { - struct hns3_mbx_vlan_filter vlan_filter = {0}; + struct hns3_mbx_vlan_filter *vlan_filter; + struct hns3_vf_to_pf_msg req = {0}; struct hns3_hw *hw = &hns->hw; - vlan_filter.is_kill = on ? 0 : 1; - vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); - vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); + req.code = HNS3_MBX_SET_VLAN; + req.subcode = HNS3_MBX_VLAN_FILTER; + vlan_filter = (struct hns3_mbx_vlan_filter *)req.data; + vlan_filter->is_kill = on ? 0 : 1; + vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id); - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - (uint8_t *)&vlan_filter, sizeof(vlan_filter), - true, NULL, 0); + return hns3vf_mbx_send(hw, &req, true, NULL, 0); } static int @@ -1002,6 +1018,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) static int hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; @@ -1009,9 +1026,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) return 0; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_ENABLE_VLAN_FILTER); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "%s vlan filter failed, ret = %d.", enable ? 
"enable" : "disable", ret); @@ -1022,12 +1040,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) static int hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG, - &msg_data, sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_VLAN_RX_OFF_CFG); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "vf %s strip failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1171,11 +1192,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev) static int hns3vf_set_alive(struct hns3_hw *hw, bool alive) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; msg_data = alive ? 1 : 0; - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data, - sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0); + memcpy(req.data, &msg_data, sizeof(msg_data)); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static void @@ -1183,11 +1206,12 @@ hns3vf_keep_alive_handler(void *param) { struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; struct hns3_adapter *hns = eth_dev->data->dev_private; + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "VF sends keeping alive cmd failed(=%d)", ret); @@ -1326,9 +1350,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns) static int hns3vf_clear_vport_list(struct hns3_hw *hw) { - return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL, - HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false, - NULL, 0); + struct hns3_vf_to_pf_msg req; + + hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL, + HNS3_MBX_VPORT_LIST_CLEAR); + return 
hns3vf_mbx_send(hw, &req, false, NULL, 0); } static int @@ -1797,12 +1823,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns) static int hns3vf_prepare_reset(struct hns3_adapter *hns) { + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; if (hw->reset.level == HNS3_VF_FUNC_RESET) { - ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL, - 0, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) return ret; } diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index c90f5d59ba21..43195ff184b1 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = { {95, -EOPNOTSUPP}, }; +void +hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode) +{ + memset(req, 0, sizeof(struct hns3_vf_to_pf_msg)); + req->code = code; + req->subcode = subcode; +} + static int hns3_resp_to_errno(uint16_t resp_code) { @@ -118,45 +126,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) } int -hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len) +hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req, bool need_resp, + uint8_t *resp_data, uint16_t resp_len) { - struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_mbx_vf_to_pf_cmd *cmd; struct hns3_cmd_desc desc; - bool is_ring_vector_msg; int ret; - req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; - - /* first two bytes are reserved for code & subcode */ - if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { - hns3_err(hw, - "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); - return -EINVAL; - } - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg.code = code; - is_ring_vector_msg = (code == 
HNS3_MBX_MAP_RING_TO_VECTOR) || - (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || - (code == HNS3_MBX_GET_RING_VECTOR_MAP); - if (!is_ring_vector_msg) - req->msg.subcode = subcode; - if (msg_data) { - if (is_ring_vector_msg) - memcpy(&req->msg.vector_id, msg_data, msg_len); - else - memcpy(&req->msg.data, msg_data, msg_len); - } + cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; + cmd->msg = *req; /* synchronous send */ if (need_resp) { - req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; + cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; rte_spinlock_lock(&hw->mbx_resp.lock); - hns3_mbx_prepare_resp(hw, code, subcode); - req->match_id = hw->mbx_resp.match_id; + hns3_mbx_prepare_resp(hw, req->code, req->subcode); + cmd->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { rte_spinlock_unlock(&hw->mbx_resp.lock); @@ -165,7 +152,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return ret; } - ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len); + ret = hns3_get_mbx_resp(hw, req->code, req->subcode, + resp_data, resp_len); rte_spinlock_unlock(&hw->mbx_resp.lock); } else { /* asynchronous send */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 64f30d2923ea..360e91c30eb9 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -210,7 +210,9 @@ struct hns3_pf_rst_done_cmd { struct hns3_hw; void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); -int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len); +void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, + uint8_t code, uint8_t subcode); +int hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req_msg, bool need_resp, + uint8_t *resp_data, uint16_t resp_len); #endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 09b7e90c7000..9087bcffed9b 100644 
--- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) static int hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) { - uint8_t msg_data[2]; + struct hns3_vf_to_pf_msg req; int ret; - memcpy(msg_data, &queue_id, sizeof(uint16_t)); - - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + memcpy(req.data, &queue_id, sizeof(uint16_t)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.", queue_id, ret); @@ -769,15 +768,14 @@ static int hns3vf_reset_all_tqps(struct hns3_hw *hw) { #define HNS3VF_RESET_ALL_TQP_DONE 1U + struct hns3_vf_to_pf_msg req; uint8_t reset_status; - uint8_t msg_data[2]; int ret; uint16_t i; - memset(msg_data, 0, sizeof(msg_data)); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, &reset_status, - sizeof(reset_status)); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, + &reset_status, sizeof(reset_status)); if (ret) { hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret); return ret; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
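To make the shape of this refactor concrete, here is a minimal standalone sketch of the setup-then-send split the patch introduces. The struct and helpers below are simplified stand-ins, not the driver's real definitions, and the MTU opcode value is an assumption for illustration only:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the driver's 16-byte mailbox message:
 * one opcode byte, one subcode byte, 14 bytes of payload. */
#define MBX_MSG_MAX_DATA_SIZE 14

struct vf_to_pf_msg {
    uint8_t code;                        /* mailbox opcode */
    uint8_t subcode;                     /* opcode qualifier */
    uint8_t data[MBX_MSG_MAX_DATA_SIZE]; /* per-command payload */
};

/* Counterpart of hns3vf_mbx_setup(): zero the whole request first so
 * stale stack bytes never reach the mailbox, then set the routing. */
static void mbx_setup(struct vf_to_pf_msg *req, uint8_t code, uint8_t subcode)
{
    memset(req, 0, sizeof(*req));
    req->code = code;
    req->subcode = subcode;
}

/* A caller then copies its payload and hands the whole struct to the
 * send routine; e.g. the MTU message (opcode value 9 is assumed). */
static void fill_mtu(struct vf_to_pf_msg *req, uint16_t mtu)
{
    mbx_setup(req, 9 /* hypothetical SET_MTU */, 0);
    memcpy(req->data, &mtu, sizeof(mtu));
}
```

With this split, the send function needs only the request struct plus the response buffer, instead of seven loose parameters.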
* [PATCH v3 4/4] net/hns3: refactor handle mailbox function 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai ` (2 preceding siblings ...) 2023-12-07 1:37 ` [PATCH v3 3/4] net/hns3: refactor send mailbox function Jie Hai @ 2023-12-07 1:37 ` Jie Hai 2023-12-07 12:31 ` [PATCH v3 0/4] net/hns3: refactor mailbox Ferruh Yigit 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-07 1:37 UTC (permalink / raw) To: dev, Yisen Zhuang, Hao Chen, Ferruh Yigit, Min Hu (Connor), Huisong Li, Wei Hu (Xavier), Hongbo Zheng Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The mailbox messages of the PF and VF are processed in the same function, which couples the two paths excessively and isn't good for maintenance. Therefore, this patch separates the interfaces that handle PF mailbox messages and VF mailbox messages. Fixes: 463e748964f5 ("net/hns3: support mailbox") Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev.h | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_mbx.c | 70 ++++++++++++++++++++++++------- drivers/net/hns3/hns3_mbx.h | 3 +- 5 files changed, 60 insertions(+), 21 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index ae81368f68ae..bccd9db0dd4d 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -380,7 +380,7 @@ hns3_interrupt_handler(void *param) hns3_warn(hw, "received reset interrupt"); hns3_schedule_reset(hns); } else if (event_cause == HNS3_VECTOR0_EVENT_MBX) { - hns3_dev_handle_mbx_msg(hw); + hns3pf_handle_mbx_msg(hw); } else if (event_cause != HNS3_VECTOR0_EVENT_PTP) { hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x " 
"ras_int_stat:0x%x cmdq_int_stat:0x%x", diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index 12d8299def39..f6c705472d0e 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -403,7 +403,7 @@ struct hns3_reset_data { /* Reset flag, covering the entire reset process */ uint16_t resetting; /* Used to disable sending cmds during reset */ - uint16_t disable_cmd; + RTE_ATOMIC(uint16_t) disable_cmd; /* The reset level being processed */ enum hns3_reset_level level; /* Reset level set, each bit represents a reset level */ diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index b0d0c29df191..f5a7a2b1f46c 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -618,7 +618,7 @@ hns3vf_interrupt_handler(void *param) hns3_schedule_reset(hns); break; case HNS3VF_VECTOR0_EVENT_MBX: - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); break; default: break; @@ -670,7 +670,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE * mailbox from PF driver to get this capability. 
*/ - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) != HNS3_PF_PUSH_LSC_CAP_UNKNOWN) break; diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 43195ff184b1..a8cbb153dc62 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -78,7 +78,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return -EIO; } - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); if (hw->mbx_resp.received_match_resp) @@ -372,9 +372,58 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) } void -hns3_dev_handle_mbx_msg(struct hns3_hw *hw) +hns3pf_handle_mbx_msg(struct hns3_hw *hw) +{ + struct hns3_cmq_ring *crq = &hw->cmq.crq; + struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_cmd_desc *desc; + uint16_t flag; + + rte_spinlock_lock(&hw->cmq.crq.lock); + + while (!hns3_cmd_crq_empty(hw)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { + rte_spinlock_unlock(&hw->cmq.crq.lock); + return; + } + desc = &crq->desc[crq->next_to_use]; + req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data; + + flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); + if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { + hns3_warn(hw, + "dropped invalid mailbox message, code = %u", + req->msg.code); + + /* dropping/not processing this invalid message */ + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + continue; + } + + switch (req->msg.code) { + case HNS3_MBX_PUSH_LINK_STATUS: + hns3pf_handle_link_change_event(hw, req); + break; + default: + hns3_err(hw, "received unsupported(%u) mbx msg", + req->msg.code); + break; + } + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + } + + /* Write back CMDQ_RQ header pointer, IMP need this pointer */ + hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use); + + 
rte_spinlock_unlock(&hw->cmq.crq.lock); +} + +void +hns3vf_handle_mbx_msg(struct hns3_hw *hw) { - struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_cmq_ring *crq = &hw->cmq.crq; struct hns3_mbx_pf_to_vf_cmd *req; struct hns3_cmd_desc *desc; @@ -385,7 +434,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) rte_spinlock_lock(&hw->cmq.crq.lock); handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY || - !rte_thread_is_intr()) && hns->is_vf; + !rte_thread_is_intr()); if (handle_out) { /* * Currently, any threads in the primary and secondary processes @@ -430,8 +479,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) continue; } - handle_out = hns->is_vf && desc->opcode == 0; - if (handle_out) { + if (desc->opcode == 0) { /* Message already processed by other thread */ crq->desc[crq->next_to_use].flag = 0; hns3_mbx_ring_ptr_move_crq(crq); @@ -448,16 +496,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) case HNS3_MBX_ASSERTING_RESET: hns3_handle_asserting_reset(hw, req); break; - case HNS3_MBX_PUSH_LINK_STATUS: - /* - * This message is reported by the firmware and is - * reported in 'struct hns3_mbx_vf_to_pf_cmd' format. - * Therefore, we should cast the req variable to - * 'struct hns3_mbx_vf_to_pf_cmd' and then process it. 
- */ - hns3pf_handle_link_change_event(hw, - (struct hns3_mbx_vf_to_pf_cmd *)req); - break; case HNS3_MBX_PUSH_VLAN_INFO: /* * When the PVID configuration status of VF device is diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 360e91c30eb9..967d9df3bcac 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -209,7 +209,8 @@ struct hns3_pf_rst_done_cmd { ((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num) struct hns3_hw; -void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); +void hns3pf_handle_mbx_msg(struct hns3_hw *hw); +void hns3vf_handle_mbx_msg(struct hns3_hw *hw); void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode); int hns3vf_mbx_send(struct hns3_hw *hw, -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v3 0/4] net/hns3: refactor mailbox 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai ` (3 preceding siblings ...) 2023-12-07 1:37 ` [PATCH v3 4/4] net/hns3: refactor handle " Jie Hai @ 2023-12-07 12:31 ` Ferruh Yigit 2023-12-08 1:06 ` Jie Hai 4 siblings, 1 reply; 33+ messages in thread From: Ferruh Yigit @ 2023-12-07 12:31 UTC (permalink / raw) To: Jie Hai, Dengdui Huang; +Cc: lihuisong, fengchengwen, liudongdong3, dev On 12/7/2023 1:37 AM, Jie Hai wrote: > This patchset refactors mailbox codes. > > -- > v3: > 1. fix the checkpatch warning on __atomic_xxx. > -- > > Dengdui Huang (4): > net/hns3: refactor VF mailbox message struct > net/hns3: refactor PF mailbox message struct > net/hns3: refactor send mailbox function > net/hns3: refactor handle mailbox function > > Hi Jie, Dengdui, Set fails to build with clang and stdatomic [1], it can be reproduced with command [2]. Can you please check? [1] https://mails.dpdk.org/archives/test-report/2023-December/525771.html [2] CC=clang meson setup --buildtype=debugoptimized --werror -Denable_stdatomic=true build && ninja -C build ^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v3 0/4] net/hns3: refactor mailbox 2023-12-07 12:31 ` [PATCH v3 0/4] net/hns3: refactor mailbox Ferruh Yigit @ 2023-12-08 1:06 ` Jie Hai 0 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 1:06 UTC (permalink / raw) To: Ferruh Yigit, Dengdui Huang; +Cc: lihuisong, fengchengwen, liudongdong3, dev On 2023/12/7 20:31, Ferruh Yigit wrote: > On 12/7/2023 1:37 AM, Jie Hai wrote: >> This patchset refactors mailbox codes. >> >> -- >> v3: >> 1. fix the checkpatch warning on __atomic_xxx. >> -- >> >> Dengdui Huang (4): >> net/hns3: refactor VF mailbox message struct >> net/hns3: refactor PF mailbox message struct >> net/hns3: refactor send mailbox function >> net/hns3: refactor handle mailbox function >> >> > > Hi Jie, Dengdui, > > Set fails to build with clang and stdatomic [1], it can be reproduced > with command [2]. Can you please check? > > [1] > https://mails.dpdk.org/archives/test-report/2023-December/525771.html > > [2] > CC=clang meson setup --buildtype=debugoptimized --werror > -Denable_stdatomic=true build && ninja -C build > . Hi, Ferruh, Thanks, I'll check and fix it. Best regards, Jie Hai ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v4 0/4] net/hns3: refactor mailbox 2023-11-08 3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai ` (7 preceding siblings ...) 2023-12-07 1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai @ 2023-12-08 6:55 ` Jie Hai 2023-12-08 6:55 ` [PATCH v4 1/4] net/hns3: refactor VF mailbox message struct Jie Hai ` (4 more replies) 8 siblings, 5 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 6:55 UTC (permalink / raw) To: dev; +Cc: lihuisong, fengchengwen, liudongdong3, haijie1 This patchset refactors the mailbox code. We will send a separate fix for all __atomic_xxx usage, so this patchset still uses __atomic_xxx. -- v4: 1. use __atomic_xxx instead of rte_atomic_XXX. 2. use '__rte_packed' instead of '#pragma pack()'. v3: 1. fix the checkpatch warning on __atomic_xxx. -- Dengdui Huang (4): net/hns3: refactor VF mailbox message struct net/hns3: refactor PF mailbox message struct net/hns3: refactor send mailbox function net/hns3: refactor handle mailbox function drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 181 +++++++++++++++++------------- drivers/net/hns3/hns3_mbx.c | 165 +++++++++++++++------------ drivers/net/hns3/hns3_mbx.h | 92 +++++++++++---- drivers/net/hns3/hns3_rxtx.c | 18 ++- 5 files changed, 276 insertions(+), 182 deletions(-) -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
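The v4 note about using `__rte_packed` instead of `#pragma pack()` matters because the mailbox wire format has no padding: the VLAN filter message must stay 5 bytes, which only holds with the packed attribute. A standalone check, with the attribute `__rte_packed` expands to spelled out so the sketch compiles on its own:

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Same fields as hns3_mbx_vlan_filter, with the packed attribute
 * (this is what DPDK's __rte_packed macro expands to). */
struct vlan_filter_wire {
    uint8_t  is_kill;
    uint16_t vlan_id;
    uint16_t proto;
} __attribute__((packed));

/* Same fields with default alignment: the compiler inserts one pad
 * byte after is_kill so vlan_id lands on a 2-byte boundary. */
struct vlan_filter_natural {
    uint8_t  is_kill;
    uint16_t vlan_id;
    uint16_t proto;
};
```

Without the attribute, the struct grows from 5 to 6 bytes and every field after `is_kill` shifts by one, corrupting the message as seen by the PF.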
* [PATCH v4 1/4] net/hns3: refactor VF mailbox message struct 2023-12-08 6:55 ` [PATCH v4 " Jie Hai @ 2023-12-08 6:55 ` Jie Hai 2023-12-08 6:55 ` [PATCH v4 2/4] net/hns3: refactor PF " Jie Hai ` (3 subsequent siblings) 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 6:55 UTC (permalink / raw) To: dev, Yisen Zhuang, Hao Chen, Chunsong Feng, Huisong Li, Min Hu (Connor), Ferruh Yigit Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the VF to PF mailbox message command is used to communicate with the PF driver, and this data region exists as a raw array. As a result, some complicated feature commands, like setting promiscuous mode, mapping/unmapping a ring vector and setting a VLAN id, have to use magic numbers to fill it. This isn't good for maintenance of the driver. So this patch refactors these messages by extracting an hns3_vf_to_pf_msg structure. In addition, the PF link change event message is reported by the firmware in hns3_mbx_vf_to_pf_cmd format, so it also needs to be modified. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 54 ++++++++++++++--------------- drivers/net/hns3/hns3_mbx.c | 24 ++++++------- drivers/net/hns3/hns3_mbx.h | 56 ++++++++++++++++++++++--------- 3 files changed, 76 insertions(+), 58 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 916cc0fb1b62..19e734ca8d8e 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * the packets with vlan tag in promiscuous mode. */ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; - req->msg[1] = en_bc_pmc ? 1 : 0; - req->msg[2] = en_uc_pmc ? 
1 : 0; - req->msg[3] = en_mc_pmc ? 1 : 0; - req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; + req->msg.code = HNS3_MBX_SET_PROMISC_MODE; + req->msg.en_bc = en_bc_pmc ? 1 : 0; + req->msg.en_uc = en_uc_pmc ? 1 : 0; + req->msg.en_mc = en_mc_pmc ? 1 : 0; + req->msg.en_limit_promisc = + hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; ret = hns3_cmd_send(hw, &desc, 1); if (ret) @@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { - struct hns3_vf_bind_vector_msg bind_msg; +#define HNS3_RING_VERCTOR_DATA_SIZE 14 + struct hns3_vf_to_pf_msg req = {0}; const char *op_str; - uint16_t code; int ret; - memset(&bind_msg, 0, sizeof(bind_msg)); - code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : + req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : HNS3_MBX_UNMAP_RING_TO_VECTOR; - bind_msg.vector_id = (uint8_t)vector_id; + req.vector_id = (uint8_t)vector_id; + req.ring_num = 1; if (queue_type == HNS3_RING_TYPE_RX) - bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX; + req.ring_param[0].int_gl_index = HNS3_RING_GL_RX; else - bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX; - - bind_msg.param[0].ring_type = queue_type; - bind_msg.ring_num = 1; - bind_msg.param[0].tqp_index = queue_id; + req.ring_param[0].int_gl_index = HNS3_RING_GL_TX; + req.ring_param[0].ring_type = queue_type; + req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg, - sizeof(bind_msg), false, NULL, 0); + ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, + HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0); if (ret) - hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.", - op_str, queue_id, bind_msg.vector_id, ret); + hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", + op_str, queue_id, req.vector_id, ret); return ret; } @@ -965,19 +964,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { -#define HNS3VF_VLAN_MBX_MSG_LEN 5 + struct hns3_mbx_vlan_filter vlan_filter = {0}; struct hns3_hw *hw = &hns->hw; - uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN]; - uint16_t proto = htons(RTE_ETHER_TYPE_VLAN); - uint8_t is_kill = on ? 0 : 1; - msg_data[0] = is_kill; - memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); - memcpy(&msg_data[3], &proto, sizeof(proto)); + vlan_filter.is_kill = on ? 
0 : 1; + vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL, - 0); + (uint8_t *)&vlan_filter, sizeof(vlan_filter), + true, NULL, 0); } static int diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index f1743c195efa..ad5ec555b39e 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -11,8 +11,6 @@ #include "hns3_intr.h" #include "hns3_rxtx.h" -#define HNS3_CMD_CODE_OFFSET 2 - static const struct errno_respcode_map err_code_map[] = { {0, 0}, {1, -EPERM}, @@ -127,29 +125,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct hns3_mbx_vf_to_pf_cmd *req; struct hns3_cmd_desc desc; bool is_ring_vector_msg; - int offset; int ret; req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; /* first two bytes are reserved for code & subcode */ - if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) { + if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { hns3_err(hw, "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET); + msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); return -EINVAL; } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = code; + req->msg.code = code; is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || (code == HNS3_MBX_GET_RING_VECTOR_MAP); if (!is_ring_vector_msg) - req->msg[1] = subcode; + req->msg.subcode = subcode; if (msg_data) { - offset = is_ring_vector_msg ? 
1 : HNS3_CMD_CODE_OFFSET; - memcpy(&req->msg[offset], msg_data, msg_len); + if (is_ring_vector_msg) + memcpy(&req->msg.vector_id, msg_data, msg_len); + else + memcpy(&req->msg.data, msg_data, msg_len); } /* synchronous send */ @@ -296,11 +295,8 @@ static void hns3pf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_vf_to_pf_cmd *req) { -#define LINK_STATUS_OFFSET 1 -#define LINK_FAIL_CODE_OFFSET 2 - - if (!req->msg[LINK_STATUS_OFFSET]) - hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]); + if (!req->msg.link_status) + hns3_link_fail_parse(hw, req->msg.link_fail_code); hns3_update_linkstatus_and_event(hw, true); } diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 4a328802b920..59fb73abcc6e 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode { HNS3_MBX_LF_XSFP_ABSENT, }; -#define HNS3_MBX_MAX_MSG_SIZE 16 #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 @@ -107,6 +106,46 @@ struct hns3_mbx_resp_status { uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; }; +struct hns3_ring_chain_param { + uint8_t ring_type; + uint8_t tqp_index; + uint8_t int_gl_index; +}; + +struct hns3_mbx_vlan_filter { + uint8_t is_kill; + uint16_t vlan_id; + uint16_t proto; +} __rte_packed; + +#define HNS3_MBX_MSG_MAX_DATA_SIZE 14 +#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 +struct hns3_vf_to_pf_msg { + uint8_t code; + union { + struct { + uint8_t subcode; + uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; + struct { + uint8_t en_bc; + uint8_t en_uc; + uint8_t en_mc; + uint8_t en_limit_promisc; + }; + struct { + uint8_t vector_id; + uint8_t ring_num; + struct hns3_ring_chain_param + ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; + }; + struct { + uint8_t link_status; + uint8_t link_fail_code; + }; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -122,7 +161,7 @@ struct hns3_mbx_vf_to_pf_cmd { uint8_t msg_len; uint8_t rsv2; 
uint16_t match_id; - uint8_t msg[HNS3_MBX_MAX_MSG_SIZE]; + struct hns3_vf_to_pf_msg msg; }; struct hns3_mbx_pf_to_vf_cmd { @@ -134,19 +173,6 @@ struct hns3_mbx_pf_to_vf_cmd { uint16_t msg[8]; }; -struct hns3_ring_chain_param { - uint8_t ring_type; - uint8_t tqp_index; - uint8_t int_gl_index; -}; - -#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 -struct hns3_vf_bind_vector_msg { - uint8_t vector_id; - uint8_t ring_num; - struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; -}; - struct hns3_pf_rst_done_cmd { uint8_t pf_rst_done; uint8_t rsv[23]; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v4 2/4] net/hns3: refactor PF mailbox message struct 2023-12-08 6:55 ` [PATCH v4 " Jie Hai 2023-12-08 6:55 ` [PATCH v4 1/4] net/hns3: refactor VF mailbox message struct Jie Hai @ 2023-12-08 6:55 ` Jie Hai 2023-12-08 6:55 ` [PATCH v4 3/4] net/hns3: refactor send mailbox function Jie Hai ` (2 subsequent siblings) 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 6:55 UTC (permalink / raw) To: dev, Yisen Zhuang, Ferruh Yigit, Hao Chen, Min Hu (Connor), Huisong Li, Chunsong Feng Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The data region in the PF-to-VF mailbox message command is used to communicate with the VF driver, and it exists only as a raw array. As a result, some relatively complex feature messages, such as mailbox response, link change event, close promisc mode, reset request and update pvid state, have to use magic numbers to access their fields. This isn't good for driver maintainability. So this patch refactors these messages by extracting a hns3_pf_to_vf_msg structure.
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++------------------- drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index ad5ec555b39e..c90f5d59ba21 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -192,17 +192,17 @@ static void hns3vf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { + struct hns3_mbx_link_status *link_info = + (struct hns3_mbx_link_status *)req->msg.msg_data; uint8_t link_status, link_duplex; - uint16_t *msg_q = req->msg; uint8_t support_push_lsc; uint32_t link_speed; - memcpy(&link_speed, &msg_q[2], sizeof(link_speed)); - link_status = rte_le_to_cpu_16(msg_q[1]); - link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]); - hns3vf_update_link_status(hw, link_status, link_speed, - link_duplex); - support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u; + link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status); + link_speed = rte_le_to_cpu_32(link_info->speed); + link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex); + hns3vf_update_link_status(hw, link_status, link_speed, link_duplex); + support_push_lsc = (link_info->flag) & 1u; hns3vf_update_push_lsc_cap(hw, support_push_lsc); } @@ -211,7 +211,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { enum hns3_reset_level reset_level; - uint16_t *msg_q = req->msg; /* * PF has asserted reset hence VF should go in pending @@ -219,7 +218,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, * has been completely reset. After this stack should * eventually be re-initialized. 
*/ - reset_level = rte_le_to_cpu_16(msg_q[1]); + reset_level = rte_le_to_cpu_16(req->msg.reset_level); hns3_atomic_set_bit(reset_level, &hw->reset.pending); hns3_warn(hw, "PF inform reset level %d", reset_level); @@ -241,8 +240,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * to match the request. */ if (req->match_id == resp->match_id) { - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = + hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -255,7 +255,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. */ - msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + msg_data = (uint32_t)req->msg.vf_mbx_msg_code << + HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode; if (resp->req_msg_data != msg_data) { hns3_warn(hw, "received response tag (%u) is mismatched with requested tag (%u)", @@ -263,8 +264,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) return; } - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -305,8 +306,7 @@ static void hns3_update_port_base_vlan_info(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { -#define PVID_STATE_OFFSET 1 - uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ? + uint16_t new_pvid_state = req->msg.pvid_state ? 
HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE; /* * Currently, hardware doesn't support more than two layers VLAN offload @@ -355,7 +355,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) while (next_to_use != tail) { desc = &crq->desc[next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag); if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B)) @@ -428,7 +428,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) desc = &crq->desc[crq->next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { @@ -484,7 +484,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) * hns3 PF kernel driver, VF driver will receive this * mailbox message from PF driver. */ - hns3_handle_promisc_info(hw, req->msg[1]); + hns3_handle_promisc_info(hw, req->msg.promisc_en); break; default: hns3_err(hw, "received unsupported(%u) mbx msg", diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 59fb73abcc6e..09780fcebdb2 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter { uint16_t proto; } __rte_packed; +struct hns3_mbx_link_status { + uint16_t link_status; + uint32_t speed; + uint16_t duplex; + uint8_t flag; +} __rte_packed; + #define HNS3_MBX_MSG_MAX_DATA_SIZE 14 #define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 struct hns3_vf_to_pf_msg { @@ -146,6 +153,22 @@ struct hns3_vf_to_pf_msg { }; }; +struct hns3_pf_to_vf_msg { + uint16_t code; + union { + struct { + uint16_t vf_mbx_msg_code; + uint16_t vf_mbx_msg_subcode; + uint16_t resp_status; + uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE]; + }; + uint16_t promisc_en; + uint16_t reset_level; + uint16_t pvid_state; + uint8_t 
msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -170,7 +193,7 @@ struct hns3_mbx_pf_to_vf_cmd { uint8_t msg_len; uint8_t rsv1; uint16_t match_id; - uint16_t msg[8]; + struct hns3_pf_to_vf_msg msg; }; struct hns3_pf_rst_done_cmd { -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v4 3/4] net/hns3: refactor send mailbox function 2023-12-08 6:55 ` [PATCH v4 " Jie Hai 2023-12-08 6:55 ` [PATCH v4 1/4] net/hns3: refactor VF mailbox message struct Jie Hai 2023-12-08 6:55 ` [PATCH v4 2/4] net/hns3: refactor PF " Jie Hai @ 2023-12-08 6:55 ` Jie Hai 2023-12-08 6:55 ` [PATCH v4 4/4] net/hns3: refactor handle " Jie Hai 2023-12-08 10:48 ` [PATCH v4 0/4] net/hns3: refactor mailbox Ferruh Yigit 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 6:55 UTC (permalink / raw) To: dev, Yisen Zhuang, Wei Hu (Xavier), Ferruh Yigit, Huisong Li, Hao Chen, Chunsong Feng Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The 'hns3_send_mbx_msg' function has the following problems: 1. the name is vague and doesn't indicate that it is a VF-side interface. 2. it takes too many input parameters, because the message is filled in as part of the send call itself. Therefore, a common interface is encapsulated to fill in the mailbox message before sending it. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev_vf.c | 141 ++++++++++++++++++------------ drivers/net/hns3/hns3_mbx.c | 50 ++++------- drivers/net/hns3/hns3_mbx.h | 8 +- drivers/net/hns3/hns3_rxtx.c | 18 ++-- 4 files changed, 116 insertions(+), 101 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 19e734ca8d8e..b0d0c29df191 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes, - RTE_ETHER_ADDR_LEN, false, NULL, 0); +
hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *old_addr; uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; /* @@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes, RTE_ETHER_ADDR_LEN); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes, - HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_MODIFY); + memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) { /* * The hns3 VF PMD depends on the hns3 PF kernel ethdev @@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_ADD, - 
mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { -#define HNS3_RING_VERCTOR_DATA_SIZE 14 struct hns3_vf_to_pf_msg req = {0}; const char *op_str; int ret; @@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, req.ring_param[0].ring_type = queue_type; req.ring_param[0].tqp_index = queue_id; op_str = mmap ? 
"Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, - HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", op_str, queue_id, req.vector_id, ret); @@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) static int hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu) { + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu, - sizeof(mtu), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0); + memcpy(req.data, &mtu, sizeof(mtu)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret); @@ -646,12 +653,13 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) uint16_t val = HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED; uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN; struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; __atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, __ATOMIC_RELEASE); - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); while (remain_ms > 0) { rte_delay_ms(HNS3_POLL_RESPONE_MS); @@ -746,12 +754,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw) static int hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw) { + struct hns3_vf_to_pf_msg req; uint8_t resp_msg; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0, - true, &resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_GET_PORT_BASE_VLAN_STATE); + ret = hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg)); if (ret) { if (ret == -ETIME) { /* @@ -792,10 +801,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw) { #define HNS3VF_TQPS_RSS_INFO_LEN 6 uint8_t 
resp_msg[HNS3VF_TQPS_RSS_INFO_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true, - resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, + resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); if (ret) { PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret); return ret; @@ -833,10 +844,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw) { uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE]; struct hns3_basic_info *basic_info; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0, - true, resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg)); if (ret) { hns3_err(hw, "failed to get basic info from PF, ret = %d.", ret); @@ -856,10 +868,11 @@ static int hns3vf_get_host_mac_addr(struct hns3_hw *hw) { uint8_t host_mac[RTE_ETHER_ADDR_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0, - true, host_mac, RTE_ETHER_ADDR_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0); + ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN); if (ret) { hns3_err(hw, "Failed to get mac addr from PF: %d", ret); return ret; @@ -908,6 +921,7 @@ static void hns3vf_request_link_info(struct hns3_hw *hw) { struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; bool send_req; int ret; @@ -919,8 +933,8 @@ hns3vf_request_link_info(struct hns3_hw *hw) if (!send_req) return; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_err(hw, "failed to fetch link status, ret = %d", ret); return; @@ -964,16 +978,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int 
hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { - struct hns3_mbx_vlan_filter vlan_filter = {0}; + struct hns3_mbx_vlan_filter *vlan_filter; + struct hns3_vf_to_pf_msg req = {0}; struct hns3_hw *hw = &hns->hw; - vlan_filter.is_kill = on ? 0 : 1; - vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); - vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); + req.code = HNS3_MBX_SET_VLAN; + req.subcode = HNS3_MBX_VLAN_FILTER; + vlan_filter = (struct hns3_mbx_vlan_filter *)req.data; + vlan_filter->is_kill = on ? 0 : 1; + vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id); - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - (uint8_t *)&vlan_filter, sizeof(vlan_filter), - true, NULL, 0); + return hns3vf_mbx_send(hw, &req, true, NULL, 0); } static int @@ -1002,6 +1018,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) static int hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; @@ -1009,9 +1026,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) return 0; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_ENABLE_VLAN_FILTER); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "%s vlan filter failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1022,12 +1040,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) static int hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; msg_data = enable ? 
1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG, - &msg_data, sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_VLAN_RX_OFF_CFG); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "vf %s strip failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1171,11 +1192,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev) static int hns3vf_set_alive(struct hns3_hw *hw, bool alive) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; msg_data = alive ? 1 : 0; - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data, - sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0); + memcpy(req.data, &msg_data, sizeof(msg_data)); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static void @@ -1183,11 +1206,12 @@ hns3vf_keep_alive_handler(void *param) { struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; struct hns3_adapter *hns = eth_dev->data->dev_private; + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "VF sends keeping alive cmd failed(=%d)", ret); @@ -1326,9 +1350,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns) static int hns3vf_clear_vport_list(struct hns3_hw *hw) { - return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL, - HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false, - NULL, 0); + struct hns3_vf_to_pf_msg req; + + hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL, + HNS3_MBX_VPORT_LIST_CLEAR); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static int @@ -1797,12 +1823,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns) static int hns3vf_prepare_reset(struct hns3_adapter *hns) { + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int 
ret; if (hw->reset.level == HNS3_VF_FUNC_RESET) { - ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL, - 0, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) return ret; } diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index c90f5d59ba21..43195ff184b1 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = { {95, -EOPNOTSUPP}, }; +void +hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode) +{ + memset(req, 0, sizeof(struct hns3_vf_to_pf_msg)); + req->code = code; + req->subcode = subcode; +} + static int hns3_resp_to_errno(uint16_t resp_code) { @@ -118,45 +126,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) } int -hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len) +hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req, bool need_resp, + uint8_t *resp_data, uint16_t resp_len) { - struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_mbx_vf_to_pf_cmd *cmd; struct hns3_cmd_desc desc; - bool is_ring_vector_msg; int ret; - req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; - - /* first two bytes are reserved for code & subcode */ - if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { - hns3_err(hw, - "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); - return -EINVAL; - } - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg.code = code; - is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || - (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || - (code == HNS3_MBX_GET_RING_VECTOR_MAP); - if (!is_ring_vector_msg) - req->msg.subcode = subcode; - if (msg_data) { - if (is_ring_vector_msg) - memcpy(&req->msg.vector_id, msg_data, msg_len); - 
else - memcpy(&req->msg.data, msg_data, msg_len); - } + cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; + cmd->msg = *req; /* synchronous send */ if (need_resp) { - req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; + cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; rte_spinlock_lock(&hw->mbx_resp.lock); - hns3_mbx_prepare_resp(hw, code, subcode); - req->match_id = hw->mbx_resp.match_id; + hns3_mbx_prepare_resp(hw, req->code, req->subcode); + cmd->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { rte_spinlock_unlock(&hw->mbx_resp.lock); @@ -165,7 +152,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return ret; } - ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len); + ret = hns3_get_mbx_resp(hw, req->code, req->subcode, + resp_data, resp_len); rte_spinlock_unlock(&hw->mbx_resp.lock); } else { /* asynchronous send */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 09780fcebdb2..2952b96ab6b3 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -208,7 +208,9 @@ struct hns3_pf_rst_done_cmd { struct hns3_hw; void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); -int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len); +void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, + uint8_t code, uint8_t subcode); +int hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req_msg, bool need_resp, + uint8_t *resp_data, uint16_t resp_len); #endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 09b7e90c7000..9087bcffed9b 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) static int hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) { - uint8_t msg_data[2]; + struct hns3_vf_to_pf_msg req; 
int ret; - memcpy(msg_data, &queue_id, sizeof(uint16_t)); - - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + memcpy(req.data, &queue_id, sizeof(uint16_t)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.", queue_id, ret); @@ -769,15 +768,14 @@ static int hns3vf_reset_all_tqps(struct hns3_hw *hw) { #define HNS3VF_RESET_ALL_TQP_DONE 1U + struct hns3_vf_to_pf_msg req; uint8_t reset_status; - uint8_t msg_data[2]; int ret; uint16_t i; - memset(msg_data, 0, sizeof(msg_data)); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, &reset_status, - sizeof(reset_status)); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, + &reset_status, sizeof(reset_status)); if (ret) { hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret); return ret; -- 2.30.0 ^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v4 4/4] net/hns3: refactor handle mailbox function 2023-12-08 6:55 ` [PATCH v4 " Jie Hai ` (2 preceding siblings ...) 2023-12-08 6:55 ` [PATCH v4 3/4] net/hns3: refactor send mailbox function Jie Hai @ 2023-12-08 6:55 ` Jie Hai 2023-12-08 10:48 ` [PATCH v4 0/4] net/hns3: refactor mailbox Ferruh Yigit 4 siblings, 0 replies; 33+ messages in thread From: Jie Hai @ 2023-12-08 6:55 UTC (permalink / raw) To: dev, Yisen Zhuang, Min Hu (Connor), Chunsong Feng, Wei Hu (Xavier), Huisong Li, Hao Chen, Hongbo Zheng Cc: fengchengwen, liudongdong3, haijie1 From: Dengdui Huang <huangdengdui@huawei.com> The mailbox messages of the PF and VF are currently processed in the same function, which both drivers call. This is excessive coupling and isn't good for maintenance. Therefore, this patch separates the interfaces that handle PF mailbox messages and VF mailbox messages. Fixes: 463e748964f5 ("net/hns3: support mailbox") Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang <huangdengdui@huawei.com> Signed-off-by: Jie Hai <haijie1@huawei.com> --- drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_mbx.c | 69 ++++++++++++++++++++++++------- drivers/net/hns3/hns3_mbx.h | 3 +- 4 files changed, 58 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index ae81368f68ae..bccd9db0dd4d 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -380,7 +380,7 @@ hns3_interrupt_handler(void *param) hns3_warn(hw, "received reset interrupt"); hns3_schedule_reset(hns); } else if (event_cause == HNS3_VECTOR0_EVENT_MBX) { - hns3_dev_handle_mbx_msg(hw); + hns3pf_handle_mbx_msg(hw); } else if (event_cause != HNS3_VECTOR0_EVENT_PTP) { hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x " "ras_int_stat:0x%x cmdq_int_stat:0x%x", diff --git
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index b0d0c29df191..f5a7a2b1f46c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -618,7 +618,7 @@ hns3vf_interrupt_handler(void *param)
 		hns3_schedule_reset(hns);
 		break;
 	case HNS3VF_VECTOR0_EVENT_MBX:
-		hns3_dev_handle_mbx_msg(hw);
+		hns3vf_handle_mbx_msg(hw);
 		break;
 	default:
 		break;
@@ -670,7 +670,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw)
 		 * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE
 		 * mailbox from PF driver to get this capability.
 		 */
-		hns3_dev_handle_mbx_msg(hw);
+		hns3vf_handle_mbx_msg(hw);
 		if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) !=
 		    HNS3_PF_PUSH_LSC_CAP_UNKNOWN)
 			break;
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 43195ff184b1..9cdbc1668a17 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -78,7 +78,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
 			return -EIO;
 		}
 
-		hns3_dev_handle_mbx_msg(hw);
+		hns3vf_handle_mbx_msg(hw);
 		rte_delay_us(HNS3_WAIT_RESP_US);
 
 		if (hw->mbx_resp.received_match_resp)
@@ -372,9 +372,57 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw)
 }
 
 void
-hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
+hns3pf_handle_mbx_msg(struct hns3_hw *hw)
+{
+	struct hns3_cmq_ring *crq = &hw->cmq.crq;
+	struct hns3_mbx_vf_to_pf_cmd *req;
+	struct hns3_cmd_desc *desc;
+	uint16_t flag;
+
+	rte_spinlock_lock(&hw->cmq.crq.lock);
+
+	while (!hns3_cmd_crq_empty(hw)) {
+		if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) {
+			rte_spinlock_unlock(&hw->cmq.crq.lock);
+			return;
+		}
+		desc = &crq->desc[crq->next_to_use];
+		req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data;
+
+		flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag);
+		if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) {
+			hns3_warn(hw,
+				  "dropped invalid mailbox message, code = %u",
+				  req->msg.code);
+
+			/* dropping/not processing this invalid message */
+			crq->desc[crq->next_to_use].flag = 0;
+			hns3_mbx_ring_ptr_move_crq(crq);
+			continue;
+		}
+
+		switch (req->msg.code) {
+		case HNS3_MBX_PUSH_LINK_STATUS:
+			hns3pf_handle_link_change_event(hw, req);
+			break;
+		default:
+			hns3_err(hw, "received unsupported(%u) mbx msg",
+				 req->msg.code);
+			break;
+		}
+		crq->desc[crq->next_to_use].flag = 0;
+		hns3_mbx_ring_ptr_move_crq(crq);
+	}
+
+	/* Write back CMDQ_RQ header pointer, IMP need this pointer */
+	hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use);
+
+	rte_spinlock_unlock(&hw->cmq.crq.lock);
+}
+
+void
+hns3vf_handle_mbx_msg(struct hns3_hw *hw)
 {
-	struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
 	struct hns3_cmq_ring *crq = &hw->cmq.crq;
 	struct hns3_mbx_pf_to_vf_cmd *req;
 	struct hns3_cmd_desc *desc;
@@ -385,7 +433,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
 	rte_spinlock_lock(&hw->cmq.crq.lock);
 
 	handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY ||
-		      !rte_thread_is_intr()) && hns->is_vf;
+		      !rte_thread_is_intr());
 	if (handle_out) {
 		/*
 		 * Currently, any threads in the primary and secondary processes
@@ -430,8 +478,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
 			continue;
 		}
 
-		handle_out = hns->is_vf && desc->opcode == 0;
-		if (handle_out) {
+		if (desc->opcode == 0) {
 			/* Message already processed by other thread */
 			crq->desc[crq->next_to_use].flag = 0;
 			hns3_mbx_ring_ptr_move_crq(crq);
@@ -448,16 +495,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
 		case HNS3_MBX_ASSERTING_RESET:
 			hns3_handle_asserting_reset(hw, req);
 			break;
-		case HNS3_MBX_PUSH_LINK_STATUS:
-			/*
-			 * This message is reported by the firmware and is
-			 * reported in 'struct hns3_mbx_vf_to_pf_cmd' format.
-			 * Therefore, we should cast the req variable to
-			 * 'struct hns3_mbx_vf_to_pf_cmd' and then process it.
-			 */
-			hns3pf_handle_link_change_event(hw,
-				(struct hns3_mbx_vf_to_pf_cmd *)req);
-			break;
 		case HNS3_MBX_PUSH_VLAN_INFO:
 			/*
 			 * When the PVID configuration status of VF device is
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 2952b96ab6b3..2b6cb8f513d0 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -207,7 +207,8 @@ struct hns3_pf_rst_done_cmd {
 	((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num)
 
 struct hns3_hw;
-void hns3_dev_handle_mbx_msg(struct hns3_hw *hw);
+void hns3pf_handle_mbx_msg(struct hns3_hw *hw);
+void hns3vf_handle_mbx_msg(struct hns3_hw *hw);
 void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req,
 		      uint8_t code, uint8_t subcode);
 int hns3vf_mbx_send(struct hns3_hw *hw,
-- 
2.30.0
* Re: [PATCH v4 0/4] net/hns3: refactor mailbox
  2023-12-08  6:55 ` [PATCH v4 " Jie Hai
                    ` (3 preceding siblings ...)
  2023-12-08  6:55 ` [PATCH v4 4/4] net/hns3: refactor handle " Jie Hai
  @ 2023-12-08 10:48 ` Ferruh Yigit
  4 siblings, 0 replies; 33+ messages in thread

From: Ferruh Yigit @ 2023-12-08 10:48 UTC (permalink / raw)
To: Jie Hai, dev; +Cc: lihuisong, fengchengwen, liudongdong3

On 12/8/2023 6:55 AM, Jie Hai wrote:
> This patchset refactors mailbox codes.
> We will send a patch fix for all __atomic_xxx, so this
> patchset still use __atomic_xxx.
>
> --
> v4:
> 1. use __atomic_xxx instead of rte_atomic_XXX.
> 2. use '__rte_packed' instead of '#pragma pack()'.
> v3:
> 1. fix the checkpatch warning on __atomic_xxx.
> --
>
> Dengdui Huang (4):
>   net/hns3: refactor VF mailbox message struct
>   net/hns3: refactor PF mailbox message struct
>   net/hns3: refactor send mailbox function
>   net/hns3: refactor handle mailbox function
>

Series applied to dpdk-next-net/main, thanks.
end of thread, other threads: [~2023-12-08 10:49 UTC | newest]

Thread overview: 33+ messages
2023-11-08  3:44 [PATCH 0/5] net/hns3: fix and refactor mailbox code Jie Hai
2023-11-08  3:44 ` [PATCH 1/5] net/hns3: fix sync mailbox failure forever Jie Hai
2023-11-08  3:44 ` [PATCH 2/5] net/hns3: refactor VF mailbox message struct Jie Hai
2023-11-09 18:51   ` Ferruh Yigit
2023-11-08  3:44 ` [PATCH 3/5] net/hns3: refactor PF " Jie Hai
2023-11-08  3:44 ` [PATCH 4/5] net/hns3: refactor send mailbox function Jie Hai
2023-11-08  3:44 ` [PATCH 5/5] net/hns3: refactor handle " Jie Hai
2023-11-09 18:50 ` [PATCH 0/5] net/hns3: fix and refactor mailbox code Ferruh Yigit
2023-11-10  6:21   ` Jie Hai
2023-11-10 12:35     ` Ferruh Yigit
2023-11-10  6:13 ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Jie Hai
2023-11-10  6:13   ` [PATCH v2 1/6] net/hns3: fix sync mailbox failure forever Jie Hai
2023-11-10  6:13   ` [PATCH v2 2/6] net/hns3: use stdatomic API Jie Hai
2023-11-10  6:13   ` [PATCH v2 3/6] net/hns3: refactor VF mailbox message struct Jie Hai
2023-11-10  6:13   ` [PATCH v2 4/6] net/hns3: refactor PF " Jie Hai
2023-11-10  6:13   ` [PATCH v2 5/6] net/hns3: refactor send mailbox function Jie Hai
2023-11-10 16:23     ` Ferruh Yigit
2023-11-10  6:13   ` [PATCH v2 6/6] net/hns3: refactor handle " Jie Hai
2023-11-10 16:12   ` [PATCH v2 0/6] net/hns3: fix and refactor some codes Ferruh Yigit
2023-12-07  1:37 ` [PATCH v3 0/4] net/hns3: refactor mailbox Jie Hai
2023-12-07  1:37   ` [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Jie Hai
2023-12-07 12:47     ` Ferruh Yigit
2023-12-07  1:37   ` [PATCH v3 2/4] net/hns3: refactor PF " Jie Hai
2023-12-07  1:37   ` [PATCH v3 3/4] net/hns3: refactor send mailbox function Jie Hai
2023-12-07  1:37   ` [PATCH v3 4/4] net/hns3: refactor handle " Jie Hai
2023-12-07 12:31   ` [PATCH v3 0/4] net/hns3: refactor mailbox Ferruh Yigit
2023-12-08  1:06     ` Jie Hai
2023-12-08  6:55 ` [PATCH v4 " Jie Hai
2023-12-08  6:55   ` [PATCH v4 1/4] net/hns3: refactor VF mailbox message struct Jie Hai
2023-12-08  6:55   ` [PATCH v4 2/4] net/hns3: refactor PF " Jie Hai
2023-12-08  6:55   ` [PATCH v4 3/4] net/hns3: refactor send mailbox function Jie Hai
2023-12-08  6:55   ` [PATCH v4 4/4] net/hns3: refactor handle " Jie Hai
2023-12-08 10:48 ` [PATCH v4 0/4] net/hns3: refactor mailbox Ferruh Yigit