From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ciara Power
To: Kai Ji
Cc: dev@dpdk.org, david.coyle@intel.com, Ciara Power, Kevin O'Sullivan
Subject: [PATCH] crypto/scheduler: fix session retrieval for ops
Date: Tue, 1 Nov 2022 16:48:02 +0000
Message-Id: <20221101164802.4094474-1-ciara.power@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

In cases where some ops failed to enqueue, the op session was never
being reset. This resulted in a segmentation fault the next time the
ops were processed. To fix this, only set the op session after the
failure condition has been checked.

Also, the wrong ops index was being used for session retrieval when
dequeueing for the secondary worker.

Fixes: 6812b9bf470e ("crypto/scheduler: use unified session")

Reported-by: Kevin O'Sullivan
Signed-off-by: Ciara Power
---
 drivers/crypto/scheduler/scheduler_failover.c |  8 ++++-
 .../scheduler/scheduler_pkt_size_distr.c      | 30 +++++++++----------
 2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 7fadcf66d0..f24d2fc44b 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -50,12 +50,18 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 	enqueued_ops = failover_worker_enqueue(&qp_ctx->primary_worker,
 			ops, nb_ops, PRIMARY_WORKER_IDX);
 
-	if (enqueued_ops < nb_ops)
+	if (enqueued_ops < nb_ops) {
+		scheduler_retrieve_session(&ops[enqueued_ops],
+			nb_ops - enqueued_ops);
 		enqueued_ops += failover_worker_enqueue(
 				&qp_ctx->secondary_worker,
 				&ops[enqueued_ops],
 				nb_ops - enqueued_ops,
 				SECONDARY_WORKER_IDX);
 
+		if (enqueued_ops < nb_ops)
+			scheduler_retrieve_session(&ops[enqueued_ops],
+				nb_ops - enqueued_ops);
+	}
 	return enqueued_ops;
 }
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 41f05e6a47..0c51fff930 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -89,9 +89,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 				ops[i]->sym->auth.data.length;
 		/* decide the target op based on the job length */
 		target[0] = !(job_len[0] & psd_qp_ctx->threshold);
-		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
-			ops[i]->sym->session =
-				sess_ctx[0]->worker_sess[target[0]];
 		p_enq_op = &enq_ops[target[0]];
 
 		/* stop schedule cops before the queue is full, this shall
@@ -103,6 +100,9 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
+		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+			ops[i]->sym->session =
+				sess_ctx[0]->worker_sess[target[0]];
 		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i];
 		p_enq_op->pos++;
 
@@ -110,9 +110,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 		job_len[1] += (ops[i + 1]->sym->cipher.data.length == 0) *
 				ops[i+1]->sym->auth.data.length;
 		target[1] = !(job_len[1] & psd_qp_ctx->threshold);
-		if (ops[i + 1]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
-			ops[i + 1]->sym->session =
-				sess_ctx[1]->worker_sess[target[1]];
 		p_enq_op = &enq_ops[target[1]];
 
 		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
@@ -121,6 +118,9 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
+		if (ops[i + 1]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+			ops[i + 1]->sym->session =
+				sess_ctx[1]->worker_sess[target[1]];
 		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+1];
 		p_enq_op->pos++;
 
@@ -128,9 +128,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 		job_len[2] += (ops[i + 2]->sym->cipher.data.length == 0) *
 				ops[i + 2]->sym->auth.data.length;
 		target[2] = !(job_len[2] & psd_qp_ctx->threshold);
-		if (ops[i + 2]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
-			ops[i + 2]->sym->session =
-				sess_ctx[2]->worker_sess[target[2]];
 		p_enq_op = &enq_ops[target[2]];
 
 		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
@@ -139,6 +136,9 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
+		if (ops[i + 2]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+			ops[i + 2]->sym->session =
+				sess_ctx[2]->worker_sess[target[2]];
 		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+2];
 		p_enq_op->pos++;
 
@@ -146,9 +146,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 		job_len[3] += (ops[i + 3]->sym->cipher.data.length == 0) *
 				ops[i + 3]->sym->auth.data.length;
 		target[3] = !(job_len[3] & psd_qp_ctx->threshold);
-		if (ops[i + 3]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
-			ops[i + 3]->sym->session =
-				sess_ctx[3]->worker_sess[target[3]];
 		p_enq_op = &enq_ops[target[3]];
 
 		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
@@ -157,6 +154,9 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
+		if (ops[i + 3]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+			ops[i + 3]->sym->session =
+				sess_ctx[3]->worker_sess[target[3]];
 		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+3];
 		p_enq_op->pos++;
 	}
@@ -171,8 +171,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 		job_len += (ops[i]->sym->cipher.data.length == 0) *
 				ops[i]->sym->auth.data.length;
 		target = !(job_len & psd_qp_ctx->threshold);
-		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
-			ops[i]->sym->session = sess_ctx->worker_sess[target];
 		p_enq_op = &enq_ops[target];
 
 		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
@@ -181,6 +179,8 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 			break;
 		}
 
+		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+			ops[i]->sym->session = sess_ctx->worker_sess[target];
 		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i];
 		p_enq_op->pos++;
 	}
@@ -251,7 +251,7 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
 	nb_deq_ops_sec = rte_cryptodev_dequeue_burst(worker->dev_id,
 			worker->qp_id, &ops[nb_deq_ops_pri],
 			nb_ops - nb_deq_ops_pri);
-	scheduler_retrieve_session(ops, nb_deq_ops_sec);
+	scheduler_retrieve_session(&ops[nb_deq_ops_pri], nb_deq_ops_sec);
 	worker->nb_inflight_cops -= nb_deq_ops_sec;
 
 	if (!worker->nb_inflight_cops)
-- 
2.25.1