From: peng1x.zhang@intel.com
To: qiming.yang@intel.com, qi.z.zhang@intel.com, dev@dpdk.org
Cc: Peng Zhang <peng1x.zhang@intel.com>, stable@dpdk.org
Subject: [PATCH v5] net/ice: add retry mechanism for DCF after failure
Date: Wed,  8 Jun 2022 10:21:25 +0000
Message-ID: <20220608102125.794566-1-peng1x.zhang@intel.com>
In-Reply-To: <20220531174827.357629-1-peng1x.zhang@intel.com>

From: Peng Zhang <peng1x.zhang@intel.com>

In the original design, if an error happens during step 3 of the situation
described below, the error is returned directly without any retry. With this
patch, the DCF retries the command at a fixed interval, up to a bounded
number of attempts. If a retry succeeds, rule creation can continue.

The situation in question unfolds in the following steps:
step 1. Kernel PF and DPDK DCF are ready at the beginning.
step 2. A VF reset happens; the kernel PF sends an event to the DCF and
sets its state to pause.
step 3. Before the DCF receives the event, a rule creation may already be
in progress, with a virtual channel command from the DCF to the kernel PF
executing.
step 4. The command then fails and an error is returned to the DCF. The
error code is set to EINVAL rather than EAGAIN (see the illustrative
sketch below).
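
As a side note, if the kernel PF signalled this transient pause with EAGAIN,
the DCF could retry selectively based on the error code alone; with EINVAL it
cannot tell a transient failure from a genuinely invalid request. A minimal
sketch of such an EAGAIN-driven retry, using a hypothetical helper
(issue_cmd() is not a real driver function):

/*
 * Illustration only, not driver code: if the PF reported -EAGAIN for the
 * transient pause, the caller could retry on the error code alone.
 * issue_cmd() is a hypothetical helper returning 0 or a negative errno.
 */
#include <errno.h>

extern int issue_cmd(void);

static int issue_cmd_selective_retry(int max_retries)
{
	int ret, n = 0;

	do {
		ret = issue_cmd();
		if (ret != -EAGAIN)	/* success or a permanent error */
			return ret;
	} while (n++ < max_retries);

	return ret;	/* still -EAGAIN after all retries */
}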

Fixes: daa714d55c72 ("net/ice: handle AdminQ command by DCF")
Cc: stable@dpdk.org

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
 v5 changes:
 - Add a retry mechanism in the DCF if sending the virtual channel
   command fails.
 v4 changes:
 - Add a retry mechanism if sending the adminQ command fails.
 v3 changes:
 - Add the situation description, expected error code, and incorrect error
   code to the commit log.
 v2 changes:
 - Modify the DCF state checking mechanism.
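
 For reviewers, the retry pattern introduced by this patch is roughly the
 following standalone sketch. The helper names and constants here are
 hypothetical placeholders, not the actual ice DCF code; the real
 implementation is in the diff below.

/*
 * Standalone sketch of the bounded retry loop (hypothetical helpers and
 * constants; see the diff below for the real implementation).
 */
#include <stdbool.h>

#define CHECK_TIME_MS	2	/* assumed polling interval in ms */
#define MAX_RETRIES	10	/* assumed retry bound */

extern int send_cmd(void);		/* send the virtchnl command pair */
extern bool response_ready(void);	/* both responses have arrived */
extern bool response_ok(void);		/* both return codes are success */
extern void delay_ms(unsigned int ms);	/* millisecond delay */

static int send_with_retry(void)
{
	int i, j = 0;

	do {
		if (send_cmd())
			return -1;	/* sending itself failed, do not retry */

		/* poll for the response, bounded by the inner loop */
		i = 0;
		do {
			if (response_ready())
				break;
			delay_ms(CHECK_TIME_MS);
		} while (i++ < MAX_RETRIES);

		if (response_ok())
			return 0;	/* success, stop retrying */

		/* the PF may still be paused by the VF reset: wait and retry */
		delay_ms(CHECK_TIME_MS);
	} while (j++ < MAX_RETRIES);

	return -1;	/* still failing after all retries */
}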

 drivers/net/ice/ice_dcf.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 55ae68c456..008e7fbf27 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -474,7 +474,7 @@ ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 	struct dcf_virtchnl_cmd desc_cmd, buff_cmd;
 	struct ice_dcf_hw *hw = dcf_hw;
 	int err = 0;
-	int i = 0;
+	int i, j = 0;
 
 	if ((buf && !buf_size) || (!buf && buf_size) ||
 	    buf_size > ICE_DCF_AQ_BUF_SZ)
@@ -501,25 +501,33 @@ ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 	ice_dcf_vc_cmd_set(hw, &desc_cmd);
 	ice_dcf_vc_cmd_set(hw, &buff_cmd);
 
-	if (ice_dcf_vc_cmd_send(hw, &desc_cmd) ||
-	    ice_dcf_vc_cmd_send(hw, &buff_cmd)) {
-		err = -1;
-		PMD_DRV_LOG(ERR, "fail to send OP_DCF_CMD_DESC/BUFF");
-		goto ret;
-	}
-
 	do {
-		if (!desc_cmd.pending && !buff_cmd.pending)
+		if (ice_dcf_vc_cmd_send(hw, &desc_cmd) ||
+		    ice_dcf_vc_cmd_send(hw, &buff_cmd)) {
+			err = -1;
+			PMD_DRV_LOG(ERR, "fail to send OP_DCF_CMD_DESC/BUFF");
+			goto ret;
+		}
+
+		i = 0;
+		do {
+			if (!desc_cmd.pending && !buff_cmd.pending)
+				break;
+
+			rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
+		} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
+
+		if (desc_cmd.v_ret == IAVF_SUCCESS && buff_cmd.v_ret == IAVF_SUCCESS)
 			break;
 
 		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
-	} while (i++ < ICE_DCF_ARQ_MAX_RETRIES);
-
+	} while (j++ < ICE_DCF_ARQ_MAX_RETRIES);
+
 	if (desc_cmd.v_ret != IAVF_SUCCESS || buff_cmd.v_ret != IAVF_SUCCESS) {
 		err = -1;
 		PMD_DRV_LOG(ERR,
-			    "No response (%d times) or return failure (desc: %d / buff: %d)",
-			    i, desc_cmd.v_ret, buff_cmd.v_ret);
+			    "No response (%d times) or return failure (desc: %d / buff: %d)"
+			    " after retry %d times cmd", i, desc_cmd.v_ret, buff_cmd.v_ret, j);
 	}
 
 ret:
-- 
2.25.1


Thread overview: 16+ messages
2022-05-11 15:49 [PATCH v2] net/ice: fix DCF state checking mechanism peng1x.zhang
2022-05-13  9:56 ` Connolly, Padraig J
2022-05-17  7:35 ` Zhang, Qi Z
2022-05-18  6:36   ` Zhang, Peng1X
2022-05-18  6:45     ` Zhang, Qi Z
2022-05-19  6:05       ` Zhang, Peng1X
2022-05-20 18:31 ` [PATCH v3] " peng1x.zhang
2022-05-21  2:17   ` Zhang, Qi Z
2022-05-31 17:48   ` [PATCH v4] net/ice: retry sending adminQ command after failure peng1x.zhang
2022-05-31 11:51     ` Zhang, Qi Z
2022-06-01  1:48       ` Zhang, Peng1X
2022-06-01  1:53         ` Zhang, Qi Z
2022-06-01  2:46           ` Zhang, Peng1X
2022-06-01  3:22             ` Zhang, Qi Z
2022-06-01  5:35               ` Zhang, Peng1X
2022-06-08 10:21     ` peng1x.zhang [this message]
