From: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
To: <gakhil@marvell.com>
Cc: <dev@dpdk.org>, Nagadheeraj Rottela <rnagadheeraj@marvell.com>,
<stable@dpdk.org>
Subject: [PATCH 1/2] crypto/nitrox: fix panic with high number of mbuf segments
Date: Thu, 17 Aug 2023 17:15:56 +0530 [thread overview]
Message-ID: <20230817114557.25574-2-rnagadheeraj@marvell.com> (raw)
In-Reply-To: <20230817114557.25574-1-rnagadheeraj@marvell.com>
When the number of segments in the source or destination mbuf is higher
than the supported maximum, the application panicked on the RTE_VERIFY()
call during scatter-gather list creation. Validate the number of mbuf
segments up front and return an error instead of panicking.
Fixes: 678f3eca1dfd ("crypto/nitrox: support cipher-only operations")
Fixes: 9282bdee5cdf ("crypto/nitrox: add cipher auth chain processing")
Cc: stable@dpdk.org
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 9edb0cc00f..d7e8ff7db4 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -10,8 +10,11 @@
#include "nitrox_sym_reqmgr.h"
#include "nitrox_logs.h"
-#define MAX_SGBUF_CNT 16
-#define MAX_SGCOMP_CNT 5
+#define MAX_SUPPORTED_MBUF_SEGS 16
+/* IV + AAD + ORH + CC + DIGEST */
+#define ADDITIONAL_SGBUF_CNT 5
+#define MAX_SGBUF_CNT (MAX_SUPPORTED_MBUF_SEGS + ADDITIONAL_SGBUF_CNT)
+#define MAX_SGCOMP_CNT (RTE_ALIGN_MUL_CEIL(MAX_SGBUF_CNT, 4) / 4)
/* SLC_STORE_INFO */
#define MIN_UDD_LEN 16
/* PKT_IN_HDR + SLC_STORE_INFO */
@@ -303,7 +306,7 @@ create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf,
datalen -= mlen;
}
- RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ RTE_ASSERT(cnt <= MAX_SGBUF_CNT);
sgtbl->map_bufs_cnt = cnt;
return 0;
}
@@ -375,7 +378,7 @@ create_cipher_outbuf(struct nitrox_softreq *sr)
sr->out.sglist[cnt].virt = &sr->resp.completion;
cnt++;
- RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ RTE_ASSERT(cnt <= MAX_SGBUF_CNT);
sr->out.map_bufs_cnt = cnt;
create_sgcomp(&sr->out);
@@ -600,7 +603,7 @@ create_aead_outbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
resp.completion);
sr->out.sglist[cnt].virt = &sr->resp.completion;
cnt++;
- RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+ RTE_ASSERT(cnt <= MAX_SGBUF_CNT);
sr->out.map_bufs_cnt = cnt;
create_sgcomp(&sr->out);
@@ -774,6 +777,14 @@ nitrox_process_se_req(uint16_t qno, struct rte_crypto_op *op,
{
int err;
+ if (unlikely(op->sym->m_src->nb_segs > MAX_SUPPORTED_MBUF_SEGS ||
+ (op->sym->m_dst &&
+ op->sym->m_dst->nb_segs > MAX_SUPPORTED_MBUF_SEGS))) {
+ NITROX_LOG(ERR, "Mbuf segments not supported. "
+ "Max supported %d\n", MAX_SUPPORTED_MBUF_SEGS);
+ return -ENOTSUP;
+ }
+
softreq_init(sr, sr->iova);
sr->ctx = ctx;
sr->op = op;
--
2.13.6