From: peng1x.zhang@intel.com
To: qiming.yang@intel.com, qi.z.zhang@intel.com, dev@dpdk.org
Cc: Peng Zhang <peng1x.zhang@intel.com>, stable@dpdk.org
Subject: [PATCH v5] net/ice: add retry mechanism for DCF after failure
Date: Wed, 8 Jun 2022 10:21:25 +0000
Message-ID: <20220608102125.794566-1-peng1x.zhang@intel.com> (raw)
In-Reply-To: <20220531174827.357629-1-peng1x.zhang@intel.com>

From: Peng Zhang <peng1x.zhang@intel.com>

In the original design, an error during step 3 of the situation below
is returned immediately, without any retry. With this patch, DCF
retries the failed virtual channel command at a fixed interval, up to a
bounded number of attempts. If a retry succeeds, rule creation can
continue.

The situation unfolds as follows:

step 1. The kernel PF and the DPDK DCF are ready at the beginning.
step 2. A VF reset happens; the kernel sends an event to DCF and sets
        the state to pause.
step 3. Before DCF receives the event, a rule creation may already be
        in progress, with a virtual channel command from DCF to the
        kernel PF still executing.
step 4. The command then fails, and the error is returned to DCF. The
        error code is set to EINVAL rather than EAGAIN.

Fixes: daa714d55c72 ("net/ice: handle AdminQ command by DCF")
Cc: stable@dpdk.org

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
v5 changes:
 - Add a retry mechanism in DCF when sending a virtual channel command
   fails.

v4 changes:
 - Add a retry mechanism when sending an adminQ command fails.

v3 changes:
 - Describe the situation, the expected error code, and the incorrect
   error code in the commit log.

v2 changes:
 - Modify the DCF state checking mechanism.
 drivers/net/ice/ice_dcf.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..008e7fbf27 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -474,7 +474,7 @@ ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 	struct dcf_virtchnl_cmd desc_cmd, buff_cmd;
 	struct ice_dcf_hw *hw = dcf_hw;
 	int err = 0;
-	int i = 0;
+	int i, j = 0;
 
 	if ((buf && !buf_size) || (!buf && buf_size) ||
 	    buf_size > ICE_DCF_AQ_BUF_SZ)
@@ -501,25 +501,33 @@ ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 	ice_dcf_vc_cmd_set(hw, &desc_cmd);
 	ice_dcf_vc_cmd_set(hw, &buff_cmd);
 
-	if (ice_dcf_vc_cmd_send(hw, &desc_cmd) ||
-	    ice_dcf_vc_cmd_send(hw, &buff_cmd)) {
-		err = -1;
-		PMD_DRV_LOG(ERR, "fail to send OP_DCF_CMD_DESC/BUFF");
-		goto ret;
-	}
-
 	do {
-		if (!desc_cmd.pending && !buff_cmd.pending)
+		if (ice_dcf_vc_cmd_send(hw, &desc_cmd) ||
+		    ice_dcf_vc_cmd_send(hw, &buff_cmd)) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "fail to send OP_DCF_CMD_DESC/BUFF");
+			goto ret;
+		}
+
+		i = 0;
+		do {
+			if (!desc_cmd.pending && !buff_cmd.pending)
+				break;
+
+			rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+		} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+		if (desc_cmd.v_ret == IAVF_SUCCESS && buff_cmd.v_ret == IAVF_SUCCESS)
 			break;
 
 		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
-	} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
+	} while (j++ < ICE_DCF_ARQ_MAX_RETRIES);
+
 	if (desc_cmd.v_ret != IAVF_SUCCESS || buff_cmd.v_ret != IAVF_SUCCESS) {
 		err = -1;
 		PMD_DRV_LOG(ERR,
-			    "No response (%d times) or return failure (desc: %d / buff: %d)",
-			    i, desc_cmd.v_ret, buff_cmd.v_ret);
+			    "No response (%d times) or return failure (desc: %d / buff: %d)"
+			    " after retry %d times cmd", i, desc_cmd.v_ret, buff_cmd.v_ret, j);
 	}
 
 ret:
-- 
2.25.1
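The core change in the patch above is wrapping the send-and-poll sequence in an outer retry loop, so a transient failure (e.g. while the PF has paused the DCF during a VF reset) is re-attempted instead of being reported immediately. A minimal, self-contained sketch of that pattern is shown below; the names `send_cmd`, `retry_send_cmd`, `MAX_RETRIES`, and the fake transient-failure counter are illustrative stand-ins, not DPDK APIs:

```c
#include <assert.h>

#define MAX_RETRIES 5	/* analogous to ICE_DCF_ARQ_MAX_RETRIES */

/* Hypothetical stand-in for the virtual channel command: it fails
 * 'fails_left' more times before succeeding, mimicking a PF that is
 * temporarily paused during a VF reset. */
static int fails_left;

static int send_cmd(void)
{
	if (fails_left > 0) {
		fails_left--;
		return -1;	/* transient failure */
	}
	return 0;		/* success */
}

/* Re-issue the command until it succeeds or the retry budget runs out,
 * mirroring the outer do/while loop the patch adds around send+poll. */
static int retry_send_cmd(void)
{
	int j = 0;

	do {
		if (send_cmd() == 0)
			return 0;
		/* in the driver: rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME); */
	} while (j++ < MAX_RETRIES);

	return -1;		/* every attempt failed */
}
```

With `fails_left = 2` the wrapper succeeds on the third attempt; when the failure outlasts the retry budget it returns -1, which matches the patch's behavior of logging and reporting an error only after all retries are exhausted.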
Thread overview: 16+ messages

2022-05-11 15:49 [PATCH v2] net/ice: fix DCF state checking mechanism peng1x.zhang
2022-05-13  9:56 ` Connolly, Padraig J
2022-05-17  7:35 ` Zhang, Qi Z
2022-05-18  6:36 ` Zhang, Peng1X
2022-05-18  6:45 ` Zhang, Qi Z
2022-05-19  6:05 ` Zhang, Peng1X
2022-05-20 18:31 ` [PATCH v3] " peng1x.zhang
2022-05-21  2:17 ` Zhang, Qi Z
2022-05-31 17:48 ` [PATCH v4] net/ice: retry sending adminQ command after failure peng1x.zhang
2022-05-31 11:51 ` Zhang, Qi Z
2022-06-01  1:48 ` Zhang, Peng1X
2022-06-01  1:53 ` Zhang, Qi Z
2022-06-01  2:46 ` Zhang, Peng1X
2022-06-01  3:22 ` Zhang, Qi Z
2022-06-01  5:35 ` Zhang, Peng1X
2022-06-08 10:21 ` peng1x.zhang [this message]