DPDK patches and discussions
* [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling
@ 2020-09-10  7:20 Matan Azrad
  2020-09-18 10:33 ` Maxime Coquelin
  2020-09-18 12:29 ` Maxime Coquelin
  0 siblings, 2 replies; 3+ messages in thread
From: Matan Azrad @ 2020-09-10  7:20 UTC
  To: Maxime Coquelin; +Cc: dev, stable

The CQ is polled in order to notify the guest about new traffic
bursts and to release FW resources for handling subsequent bursts.

When the HW is faster than the SW, late polling may leave all the FW
resources busy. In this case, due to wrong WQE counter masking, the
completion count is calculated as 0 even though the queue is full.

Change the WQE counter masking to the full 16-bit width defined by the
CQE format, instead of the CQ size mask.

Fixes: c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
Cc: stable@dpdk.org

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 5a2d4fb..2672935 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -172,7 +172,7 @@
 	cq->callfd = callfd;
 	/* Init CQ to ones to be in HW owner in the start. */
 	cq->cqes[0].op_own = MLX5_CQE_OWNER_MASK;
-	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(cq_size - 1);
+	cq->cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -187,7 +187,6 @@
 	struct mlx5_vdpa_event_qp *eqp =
 				container_of(cq, struct mlx5_vdpa_event_qp, cq);
 	const unsigned int cq_size = 1 << cq->log_desc_n;
-	const unsigned int cq_mask = cq_size - 1;
 	union {
 		struct {
 			uint16_t wqe_counter;
@@ -196,13 +195,13 @@
 		};
 		uint32_t word;
 	} last_word;
-	uint16_t next_wqe_counter = cq->cq_ci & cq_mask;
+	uint16_t next_wqe_counter = cq->cq_ci;
 	uint16_t cur_wqe_counter;
 	uint16_t comp;
 
 	last_word.word = rte_read32(&cq->cqes[0].wqe_counter);
 	cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter);
-	comp = (cur_wqe_counter + 1u - next_wqe_counter) & cq_mask;
+	comp = cur_wqe_counter + (uint16_t)1 - next_wqe_counter;
 	if (comp) {
 		cq->cq_ci += comp;
 		MLX5_ASSERT(!!(cq->cq_ci & cq_size) ==
-- 
1.8.3.1
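
To make the counter arithmetic concrete, below is a minimal standalone
sketch of the before/after completion count. The names mirror the
driver's, but this is illustration only: it touches no hardware, and
cq_ci = 100 / cq_size = 16 are arbitrary assumed values.

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	const uint16_t cq_size = 16;          /* 1 << cq->log_desc_n */
	const uint16_t cq_mask = cq_size - 1;
	const uint16_t cq_ci = 100;           /* SW consumer index */
	/* Queue completely full: the newest CQE's WQE counter is
	 * cq_size - 1 ahead of the consumer index.
	 */
	const uint16_t cur_wqe_counter = cq_ci + cq_size - 1;

	/* Old computation: masking by the CQ size folds a full queue
	 * down to 0 pending completions, so polling stalls.
	 */
	uint16_t old_comp =
		(cur_wqe_counter + 1u - (cq_ci & cq_mask)) & cq_mask;
	/* Fixed computation: plain 16-bit wrap-around arithmetic, as
	 * the CQE format defines wqe_counter as a 16-bit field.
	 */
	uint16_t new_comp = cur_wqe_counter + (uint16_t)1 - cq_ci;

	printf("old: %u\n", old_comp);  /* 0  - looks empty, wrong */
	printf("new: %u\n", new_comp);  /* 16 - full, correct      */
	return 0;
}

The same 16-bit arithmetic explains the first hunk: assuming cq_ci
starts at 0, initializing cqes[0].wqe_counter to UINT16_MAX keeps a
freshly created CQ reporting 0 completions (UINT16_MAX + 1 wraps to 0),
whereas the old cq_size - 1 value would now report a spurious full
queue.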



* Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling
  2020-09-10  7:20 [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling Matan Azrad
@ 2020-09-18 10:33 ` Maxime Coquelin
  2020-09-18 12:29 ` Maxime Coquelin
  1 sibling, 0 replies; 3+ messages in thread
From: Maxime Coquelin @ 2020-09-18 10:33 UTC
  To: Matan Azrad; +Cc: dev, stable



On 9/10/20 9:20 AM, Matan Azrad wrote:
> The CQ is polled in order to notify the guest about new traffic
> bursts and to release FW resources for handling subsequent bursts.
> 
> When the HW is faster than the SW, late polling may leave all the FW
> resources busy. In this case, due to wrong WQE counter masking, the
> completion count is calculated as 0 even though the queue is full.
> 
> Change the WQE counter masking to the full 16-bit width defined by the
> CQE format, instead of the CQ size mask.
> 
> Fixes: c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Matan Azrad <matan@nvidia.com>
> Acked-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 


Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling
  2020-09-10  7:20 [dpdk-dev] [PATCH] vdpa/mlx5: fix completion queue polling Matan Azrad
  2020-09-18 10:33 ` Maxime Coquelin
@ 2020-09-18 12:29 ` Maxime Coquelin
  1 sibling, 0 replies; 3+ messages in thread
From: Maxime Coquelin @ 2020-09-18 12:29 UTC
  To: Matan Azrad; +Cc: dev, stable



On 9/10/20 9:20 AM, Matan Azrad wrote:
> The CQ is polled in order to notify the guest about new traffic
> bursts and to release FW resources for handling subsequent bursts.
> 
> When the HW is faster than the SW, late polling may leave all the FW
> resources busy. In this case, due to wrong WQE counter masking, the
> completion count is calculated as 0 even though the queue is full.
> 
> Change the WQE counter masking to the full 16-bit width defined by the
> CQE format, instead of the CQ size mask.
> 
> Fixes: c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Matan Azrad <matan@nvidia.com>
> Acked-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)

Applied to dpdk-next-virtio/master.

Thanks,
Maxime


