* [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues
@ 2022-04-22 10:46 Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 02/28] net/cnxk: add receive channel backpressure for SDP Nithin Dabilpuram
` (26 more replies)
0 siblings, 27 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Subrahmanyam Nilla
From: Subrahmanyam Nilla <snilla@marvell.com>
Currently, only the base channel number is configured as the default
channel for all SDP send queues. Due to this, packets sent on
different SQs land on the same output queue on the host. The
channel number in the send queue should instead be configured
according to the number of queues assigned to the SDP PF or VF
device.
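The fix maps each SQ to a channel by its queue id instead of always using the base channel. A minimal sketch of that mapping, mirroring the diff below (the channel base and count values in the test are illustrative, not real hardware values):

```c
#include <stdint.h>

/* Sketch of the per-SQ default channel selection: SDP queues are
 * spread across the available Tx channels by queue id, so packets
 * from different SQs land on different host output queues. */
static uint16_t
sdp_sq_default_chan(uint16_t tx_chan_base, uint16_t tx_chan_cnt, uint16_t qid)
{
	return tx_chan_base + (qid % tx_chan_cnt);
}
```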
Signed-off-by: Subrahmanyam Nilla <snilla@marvell.com>
---
v2:
- Fixed compilation issue with some compilers in patch 24/24
- Added a few more fixes to net/cnxk and related code in common/cnxk
drivers/common/cnxk/roc_nix_queue.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 07dab4b..76c049c 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -706,6 +706,7 @@ static int
sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
uint16_t smq)
{
+ struct roc_nix *roc_nix = nix_priv_to_roc_nix(nix);
struct mbox *mbox = (&nix->dev)->mbox;
struct nix_aq_enq_req *aq;
@@ -721,7 +722,11 @@ sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
aq->sq.max_sqe_size = sq->max_sqe_sz;
aq->sq.smq = smq;
aq->sq.smq_rr_quantum = rr_quantum;
- aq->sq.default_chan = nix->tx_chan_base;
+ if (roc_nix_is_sdp(roc_nix))
+ aq->sq.default_chan =
+ nix->tx_chan_base + (sq->qid % nix->tx_chan_cnt);
+ else
+ aq->sq.default_chan = nix->tx_chan_base;
aq->sq.sqe_stype = NIX_STYPE_STF;
aq->sq.ena = 1;
aq->sq.sso_ena = !!sq->sso_ena;
--
2.8.4
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v2 02/28] net/cnxk: add receive channel backpressure for SDP
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 03/28] common/cnxk: add new pkind for CPT when ts is enabled Nithin Dabilpuram
` (25 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Radha Mohan Chintakuntla
From: Radha Mohan Chintakuntla <radhac@marvell.com>
The SDP interfaces also need to be configured for NIX receive channel
backpressure for packet reception.
Signed-off-by: Radha Mohan Chintakuntla <radhac@marvell.com>
---
drivers/common/cnxk/roc_nix_fc.c | 11 +++++------
drivers/net/cnxk/cnxk_ethdev.c | 3 +++
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 8e31443..a0505bd 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -38,16 +38,13 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable)
struct nix_bp_cfg_rsp *rsp;
int rc = -ENOSPC, i;
- if (roc_nix_is_sdp(roc_nix))
- return 0;
-
if (enable) {
req = mbox_alloc_msg_nix_bp_enable(mbox);
if (req == NULL)
return rc;
req->chan_base = 0;
- if (roc_nix_is_lbk(roc_nix))
+ if (roc_nix_is_lbk(roc_nix) || roc_nix_is_sdp(roc_nix))
req->chan_cnt = NIX_LBK_MAX_CHAN;
else
req->chan_cnt = NIX_CGX_MAX_CHAN;
@@ -203,7 +200,8 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
int
roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
{
- if (roc_nix_is_vf_or_sdp(roc_nix) && !roc_nix_is_lbk(roc_nix))
+ if (!roc_nix_is_pf(roc_nix) && !roc_nix_is_lbk(roc_nix) &&
+ !roc_nix_is_sdp(roc_nix))
return 0;
if (fc_cfg->type == ROC_NIX_FC_CQ_CFG)
@@ -219,7 +217,8 @@ roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
int
roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
{
- if (roc_nix_is_vf_or_sdp(roc_nix) && !roc_nix_is_lbk(roc_nix))
+ if (!roc_nix_is_pf(roc_nix) && !roc_nix_is_lbk(roc_nix) &&
+ !roc_nix_is_sdp(roc_nix))
return 0;
if (fc_cfg->type == ROC_NIX_FC_CQ_CFG)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 1fa4131..bd31a9a 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -310,6 +310,9 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct cnxk_fc_cfg *fc = &dev->fc_cfg;
int rc;
+ if (roc_nix_is_sdp(&dev->nix))
+ return 0;
+
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
if (roc_model_is_cn96_ax() &&
dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG)
--
2.8.4
* [PATCH v2 03/28] common/cnxk: add new pkind for CPT when ts is enabled
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 02/28] net/cnxk: add receive channel backpressure for SDP Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT Nithin Dabilpuram
` (24 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Vidya Sagar Velumuri
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
With timestamping enabled, a timestamp is added to second-pass packets
from CPT. NPC needs a different configuration to parse second-pass
packets with and without a timestamp.
A new pkind is defined for CPT for when timestamping is enabled on NIX.
CPT should use this pkind for second-pass packets when timestamping is
enabled for the corresponding pktio.
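The choice between the two pkinds reduces to a simple selection on the timestamp flag, using the values from the header change in this patch (58 for the existing pkind, 54 for the new timestamp pkind):

```c
#include <stdint.h>
#include <stdbool.h>

#define ROC_IE_OT_CPT_PKIND    58 /* second pass without timestamp */
#define ROC_IE_OT_CPT_TS_PKIND 54 /* second pass with timestamp */

/* Pick the CPT pkind for second-pass packets depending on whether
 * timestamping is enabled on the NIX. */
static uint8_t
cpt_second_pass_pkind(bool ts_ena)
{
	return ts_ena ? ROC_IE_OT_CPT_TS_PKIND : ROC_IE_OT_CPT_PKIND;
}
```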
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_ie_ot.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/cnxk/roc_ie_ot.h b/drivers/common/cnxk/roc_ie_ot.h
index 173cc2c..56a1e9f 100644
--- a/drivers/common/cnxk/roc_ie_ot.h
+++ b/drivers/common/cnxk/roc_ie_ot.h
@@ -15,6 +15,7 @@
#define ROC_IE_OT_CTX_ILEN 2
/* PKIND to be used for CPT Meta parsing */
#define ROC_IE_OT_CPT_PKIND 58
+#define ROC_IE_OT_CPT_TS_PKIND 54
#define ROC_IE_OT_SA_CTX_HDR_SIZE 1
enum roc_ie_ot_ucc_ipsec {
--
2.8.4
* [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 02/28] net/cnxk: add receive channel backpressure for SDP Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 03/28] common/cnxk: add new pkind for CPT when ts is enabled Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-26 10:12 ` Ray Kinsella
2022-04-22 10:46 ` [PATCH v2 05/28] common/cnxk: fix SQ flush sequence Nithin Dabilpuram
` (23 subsequent siblings)
26 siblings, 1 reply; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Ray Kinsella
Cc: dev, Vidya Sagar Velumuri
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add a new API to configure the SA table entries with the new CPT pkind
when timestamping is enabled.
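At its core, the new API walks the inbound SA table and rewrites the pkind field of every entry, skipping the walk when the first entry already carries the requested value. A simplified, self-contained sketch of that loop; the struct layout here is illustrative, not the real roc_ot_ipsec_inb_sa:

```c
#include <stdint.h>

/* Illustrative stand-in for the inbound SA entry; the real
 * roc_ot_ipsec_inb_sa has many more fields and a fixed size. */
struct inb_sa {
	uint8_t pkind;
	uint8_t pad[63];
};

/* Rewrite the pkind of every SA in a flat table, as
 * roc_nix_inl_ts_pkind_set() does over the inline inbound SA base. */
static void
sa_table_set_pkind(struct inb_sa *sa_base, uint16_t max_spi, uint8_t pkind)
{
	uint16_t i;

	/* Nothing to do if the table already carries the requested pkind. */
	if (max_spi == 0 || sa_base[0].pkind == pkind)
		return;
	for (i = 0; i < max_spi; i++)
		sa_base[i].pkind = pkind;
}
```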
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 59 ++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_inl.h | 2 ++
drivers/common/cnxk/roc_nix_inl_priv.h | 1 +
drivers/common/cnxk/version.map | 1 +
4 files changed, 63 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 826c6e9..bfb33b1 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -1011,6 +1011,65 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
return -ENOTSUP;
}
+int
+roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena, bool inb_inl_dev)
+{
+ struct idev_cfg *idev = idev_get_cfg();
+ struct nix_inl_dev *inl_dev = NULL;
+ void *sa, *sa_base = NULL;
+ struct nix *nix = NULL;
+ uint16_t max_spi = 0;
+ uint8_t pkind = 0;
+ int i;
+
+ if (roc_model_is_cn9k())
+ return 0;
+
+ if (!inb_inl_dev && (roc_nix == NULL))
+ return -EINVAL;
+
+ if (inb_inl_dev) {
+ if ((idev == NULL) || (idev->nix_inl_dev == NULL))
+ return 0;
+ inl_dev = idev->nix_inl_dev;
+ } else {
+ nix = roc_nix_to_nix_priv(roc_nix);
+ if (!nix->inl_inb_ena)
+ return 0;
+ sa_base = nix->inb_sa_base;
+ max_spi = roc_nix->ipsec_in_max_spi;
+ }
+
+ if (inl_dev) {
+ if (inl_dev->rq_refs == 0) {
+ inl_dev->ts_ena = ts_ena;
+ max_spi = inl_dev->ipsec_in_max_spi;
+ sa_base = inl_dev->inb_sa_base;
+ } else if (inl_dev->ts_ena != ts_ena) {
+ if (inl_dev->ts_ena)
+ plt_err("Inline device is already configured with TS enable");
+ else
+ plt_err("Inline device is already configured with TS disable");
+ return -ENOTSUP;
+ } else {
+ return 0;
+ }
+ }
+
+ pkind = ts_ena ? ROC_IE_OT_CPT_TS_PKIND : ROC_IE_OT_CPT_PKIND;
+
+ sa = (uint8_t *)sa_base;
+ if (pkind == ((struct roc_ot_ipsec_inb_sa *)sa)->w0.s.pkind)
+ return 0;
+
+ for (i = 0; i < max_spi; i++) {
+ sa = ((uint8_t *)sa_base) +
+ (i * ROC_NIX_INL_OT_IPSEC_INB_SA_SZ);
+ ((struct roc_ot_ipsec_inb_sa *)sa)->w0.s.pkind = pkind;
+ }
+ return 0;
+}
+
void
roc_nix_inl_dev_lock(void)
{
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 2c2a4d7..633f090 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -174,6 +174,8 @@ int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
uint64_t __roc_api roc_nix_inl_dev_rq_limit_get(void);
int __roc_api roc_nix_reassembly_configure(uint32_t max_wait_time,
uint16_t max_frags);
+int __roc_api roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena,
+ bool inb_inl_dev);
/* NIX Inline Outbound API */
int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index 0fa5e09..f9646a3 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -76,6 +76,7 @@ struct nix_inl_dev {
uint32_t inb_spi_mask;
bool attach_cptlf;
bool wqe_skip;
+ bool ts_ena;
};
int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 2a122e5..53586da 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -159,6 +159,7 @@ INTERNAL {
roc_nix_inl_outb_is_enabled;
roc_nix_inl_outb_soft_exp_poll_switch;
roc_nix_inl_sa_sync;
+ roc_nix_inl_ts_pkind_set;
roc_nix_inl_ctx_write;
roc_nix_inl_dev_pffunc_get;
roc_nix_cpt_ctx_cache_sync;
--
2.8.4
* [PATCH v2 05/28] common/cnxk: fix SQ flush sequence
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (2 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 06/28] common/cnxk: skip probing SoC environment for CN9k Nithin Dabilpuram
` (22 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
From: Satha Rao <skoteshwar@marvell.com>
Fix the SQ flush sequence to issue a NIX RX SW sync after the SMQ
flush. This sync ensures that all packets that were in flight are
flushed out of memory.
This patch also fixes NULL-return issues reported by a static
analysis tool in the Traffic Manager code and syncs the mbox
definitions to those of the kernel version.
Fixes: 05d727e8b14a ("common/cnxk: support NIX traffic management")
Fixes: 0b7e667ee303 ("common/cnxk: enable packet marking")
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
---
drivers/common/cnxk/roc_mbox.h | 35 +++++++++++++++++++++++++++++++++--
drivers/common/cnxk/roc_nix_tm.c | 7 +++++++
drivers/common/cnxk/roc_nix_tm_mark.c | 9 +++++++++
3 files changed, 49 insertions(+), 2 deletions(-)
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index b608f58..2c30f19 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -116,7 +116,7 @@ struct mbox_msghdr {
msg_rsp) \
M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
sso_grp_priority) \
- M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
+ M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, ssow_lf_inv_req, msg_rsp) \
M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
msg_rsp) \
M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
@@ -125,6 +125,9 @@ struct mbox_msghdr {
sso_hws_stats) \
M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
sso_hw_xaq_release, msg_rsp) \
+ M(SSO_CONFIG_LSW, 0x612, ssow_config_lsw, ssow_config_lsw, msg_rsp) \
+ M(SSO_HWS_CHNG_MSHIP, 0x613, ssow_chng_mship, ssow_chng_mship, \
+ msg_rsp) \
/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
tim_lf_alloc_rsp) \
@@ -259,7 +262,8 @@ struct mbox_msghdr {
M(NIX_CPT_BP_ENABLE, 0x8020, nix_cpt_bp_enable, nix_bp_cfg_req, \
nix_bp_cfg_rsp) \
M(NIX_CPT_BP_DISABLE, 0x8021, nix_cpt_bp_disable, nix_bp_cfg_req, \
- msg_rsp)
+ msg_rsp) \
+ M(NIX_RX_SW_SYNC, 0x8022, nix_rx_sw_sync, msg_req, msg_rsp)
/* Messages initiated by AF (range 0xC00 - 0xDFF) */
#define MBOX_UP_CGX_MESSAGES \
@@ -1268,6 +1272,33 @@ struct ssow_lf_free_req {
uint16_t __io hws;
};
+#define SSOW_INVAL_SELECTIVE_VER 0x1000
+struct ssow_lf_inv_req {
+ struct mbox_msghdr hdr;
+ uint16_t nb_hws; /* Number of HWS to invalidate*/
+ uint16_t hws[MAX_RVU_BLKLF_CNT]; /* Array of HWS */
+};
+
+struct ssow_config_lsw {
+ struct mbox_msghdr hdr;
+#define SSOW_LSW_DIS 0
+#define SSOW_LSW_GW_WAIT 1
+#define SSOW_LSW_GW_IMM 2
+ uint8_t __io lsw_mode;
+#define SSOW_WQE_REL_LSW_WAIT 0
+#define SSOW_WQE_REL_IMM 1
+ uint8_t __io wqe_release;
+};
+
+struct ssow_chng_mship {
+ struct mbox_msghdr hdr;
+ uint8_t __io set; /* Membership set to modify. */
+ uint8_t __io enable; /* Enable/Disable the hwgrps. */
+ uint8_t __io hws; /* HWS to modify. */
+ uint16_t __io nb_hwgrps; /* Number of hwgrps in the array */
+ uint16_t __io hwgrps[MAX_RVU_BLKLF_CNT]; /* Array of hwgrps. */
+};
+
struct sso_hw_setconfig {
struct mbox_msghdr hdr;
uint32_t __io npa_aura_id;
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 5b70c7b..42d3abd 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -590,6 +590,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
struct nix_tm_node *node, *sibling;
struct nix_tm_node_list *list;
enum roc_nix_tm_tree tree;
+ struct msg_req *req;
struct mbox *mbox;
struct nix *nix;
uint16_t qid;
@@ -679,6 +680,12 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
rc);
goto cleanup;
}
+
+ req = mbox_alloc_msg_nix_rx_sw_sync(mbox);
+ if (!req)
+ return -ENOSPC;
+
+ rc = mbox_process(mbox);
cleanup:
/* Restore cgx state */
if (!roc_nix->io_enabled) {
diff --git a/drivers/common/cnxk/roc_nix_tm_mark.c b/drivers/common/cnxk/roc_nix_tm_mark.c
index 64cf679..d37292e 100644
--- a/drivers/common/cnxk/roc_nix_tm_mark.c
+++ b/drivers/common/cnxk/roc_nix_tm_mark.c
@@ -110,6 +110,9 @@ nix_tm_update_red_algo(struct nix *nix, bool red_send)
/* Update txschq config */
req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+
req->lvl = tm_node->hw_lvl;
k = prepare_tm_shaper_red_algo(tm_node, req->reg, req->regval,
req->regval_mask);
@@ -208,6 +211,9 @@ nix_tm_mark_init(struct nix *nix)
/* Null mark format */
req = mbox_alloc_msg_nix_mark_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+
rc = mbox_process_msg(mbox, (void *)&rsp);
if (rc) {
plt_err("TM failed to alloc null mark format, rc=%d", rc);
@@ -220,6 +226,9 @@ nix_tm_mark_init(struct nix *nix)
for (i = 0; i < ROC_NIX_TM_MARK_MAX; i++) {
for (j = 0; j < ROC_NIX_TM_MARK_COLOR_MAX; j++) {
req = mbox_alloc_msg_nix_mark_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+
req->offset = mark_off[i];
switch (j) {
--
2.8.4
* [PATCH v2 06/28] common/cnxk: skip probing SoC environment for CN9k
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (3 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 05/28] common/cnxk: fix SQ flush sequence Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 07/28] common/cnxk: fix issues in soft expiry disable path Nithin Dabilpuram
` (21 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Rakesh Kudurumalla
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
The SoC run-platform file is not present on CN9k, so probing
is done only for CN10k devices.
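The probing change boils down to an existence check before opening the file, falling back to a hardware-platform default when it is absent. A hedged sketch of that check (the path and return strings are placeholders for illustration):

```c
#include <unistd.h>

/* Fall back to "HW_PLATFORM" when the SoC run-platform file does not
 * exist (as on CN9k); otherwise signal that the file should be parsed,
 * as of_env_get() now does after the access() check. */
static const char *
probe_env_platform(const char *path)
{
	if (access(path, F_OK) != 0)
		return "HW_PLATFORM";
	return "PARSE_FILE";
}
```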
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/common/cnxk/roc_model.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c
index 1dd374e..a68baa6 100644
--- a/drivers/common/cnxk/roc_model.c
+++ b/drivers/common/cnxk/roc_model.c
@@ -2,6 +2,9 @@
* Copyright(C) 2021 Marvell.
*/
+#include <fcntl.h>
+#include <unistd.h>
+
#include "roc_api.h"
#include "roc_priv.h"
@@ -211,6 +214,12 @@ of_env_get(struct roc_model *model)
uint64_t flag;
FILE *fp;
+ if (access(path, F_OK) != 0) {
+ strncpy(model->env, "HW_PLATFORM", ROC_MODEL_STR_LEN_MAX - 1);
+ model->flag |= ROC_ENV_HW;
+ return;
+ }
+
fp = fopen(path, "r");
if (!fp) {
plt_err("Failed to open %s", path);
--
2.8.4
* [PATCH v2 07/28] common/cnxk: fix issues in soft expiry disable path
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (4 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 06/28] common/cnxk: skip probing SoC environment for CN9k Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 08/28] common/cnxk: convert warning to debug print Nithin Dabilpuram
` (20 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Fix issues in the mode where soft expiry is disabled in RoC.
When soft expiry support is not enabled in the inline device,
memory is not allocated for the ring base array, so it must
not be accessed.
Fixes: bea5d990a93b ("net/cnxk: support outbound soft expiry notification")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 9 +++++----
drivers/common/cnxk/roc_nix_inl_dev.c | 5 +++--
drivers/common/cnxk/roc_nix_inl_priv.h | 1 +
3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index bfb33b1..6c72248 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -208,7 +208,7 @@ roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix, bool inl_dev_sa)
uintptr_t
roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix, bool inb_inl_dev, uint32_t spi)
{
- uint32_t max_spi, min_spi, mask;
+ uint32_t max_spi = 0, min_spi = 0, mask;
uintptr_t sa_base;
uint64_t sz;
@@ -461,7 +461,7 @@ roc_nix_inl_outb_init(struct roc_nix *roc_nix)
nix->outb_se_ring_base =
roc_nix->port_id * ROC_NIX_SOFT_EXP_PER_PORT_MAX_RINGS;
- if (inl_dev == NULL) {
+ if (inl_dev == NULL || !inl_dev->set_soft_exp_poll) {
nix->outb_se_ring_cnt = 0;
return 0;
}
@@ -537,11 +537,12 @@ roc_nix_inl_outb_fini(struct roc_nix *roc_nix)
plt_free(nix->outb_sa_base);
nix->outb_sa_base = NULL;
- if (idev && idev->nix_inl_dev) {
+ if (idev && idev->nix_inl_dev && nix->outb_se_ring_cnt) {
inl_dev = idev->nix_inl_dev;
ring_base = inl_dev->sa_soft_exp_ring;
+ ring_base += nix->outb_se_ring_base;
- for (i = 0; i < ROC_NIX_INL_MAX_SOFT_EXP_RNGS; i++) {
+ for (i = 0; i < nix->outb_se_ring_cnt; i++) {
if (ring_base[i])
plt_free(PLT_PTR_CAST(ring_base[i]));
}
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 51f1f68..5e61a42 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -814,6 +814,7 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
inl_dev->wqe_skip = roc_inl_dev->wqe_skip;
inl_dev->spb_drop_pc = NIX_AURA_DROP_PC_DFLT;
inl_dev->lpb_drop_pc = NIX_AURA_DROP_PC_DFLT;
+ inl_dev->set_soft_exp_poll = roc_inl_dev->set_soft_exp_poll;
if (roc_inl_dev->spb_drop_pc)
inl_dev->spb_drop_pc = roc_inl_dev->spb_drop_pc;
@@ -849,7 +850,7 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
if (rc)
goto sso_release;
- if (roc_inl_dev->set_soft_exp_poll) {
+ if (inl_dev->set_soft_exp_poll) {
rc = nix_inl_outb_poll_thread_setup(inl_dev);
if (rc)
goto cpt_release;
@@ -898,7 +899,7 @@ roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev)
inl_dev = idev->nix_inl_dev;
pci_dev = inl_dev->pci_dev;
- if (roc_inl_dev->set_soft_exp_poll) {
+ if (inl_dev->set_soft_exp_poll) {
soft_exp_poll_thread_exit = true;
pthread_join(inl_dev->soft_exp_poll_thread, NULL);
plt_bitmap_free(inl_dev->soft_exp_ring_bmap);
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index f9646a3..1ab8470 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -59,6 +59,7 @@ struct nix_inl_dev {
pthread_t soft_exp_poll_thread;
uint32_t soft_exp_poll_freq;
uint64_t *sa_soft_exp_ring;
+ bool set_soft_exp_poll;
/* Soft expiry ring bitmap */
struct plt_bitmap *soft_exp_ring_bmap;
--
2.8.4
* [PATCH v2 08/28] common/cnxk: convert warning to debug print
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (5 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 07/28] common/cnxk: fix issues in soft expiry disable path Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 09/28] common/cnxk: use aggregate level rr prio from mbox Nithin Dabilpuram
` (19 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
If an inbound SA SPI is not in the min-max range specified in
devargs, a warning was printed. This is now converted to a debug
print because, if the entry is found to be a duplicate in the mask,
a separate error is already printed, so the warning is redundant.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 6c72248..2c013cb 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -221,7 +221,7 @@ roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix, bool inb_inl_dev, uint32_t spi)
mask = roc_nix_inl_inb_spi_range(roc_nix, inb_inl_dev, &min_spi,
&max_spi);
if (spi > max_spi || spi < min_spi)
- plt_warn("Inbound SA SPI %u not in range (%u..%u)", spi,
+ plt_nix_dbg("Inbound SA SPI %u not in range (%u..%u)", spi,
min_spi, max_spi);
/* Get SA size */
--
2.8.4
* [PATCH v2 09/28] common/cnxk: use aggregate level rr prio from mbox
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (6 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 08/28] common/cnxk: convert warning to debug print Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 10/28] net/cnxk: support loopback mode on AF VF's Nithin Dabilpuram
` (18 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Use the aggregate level Round Robin priority from the mbox response
instead of fixing it to a single macro. This is useful when the
kernel AF driver changes the constant.
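The register change itself is mechanical: the RR priority, now read from the txsch alloc mbox response rather than a macro, is shifted into the TL1 topology register value. A minimal sketch of that encoding:

```c
#include <stdint.h>

/* Build the NIX_AF_TL1X_TOPOLOGY register value from the aggregate
 * level RR priority returned by the AF, mirroring
 * regval[k] = (nix->tm_aggr_lvl_rr_prio << 1) in the diff. */
static uint64_t
tl1_topology_regval(uint8_t aggr_lvl_rr_prio)
{
	return (uint64_t)aggr_lvl_rr_prio << 1;
}
```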
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix_priv.h | 5 +++--
drivers/common/cnxk/roc_nix_tm.c | 3 ++-
drivers/common/cnxk/roc_nix_tm_utils.c | 8 ++++----
3 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 9b9ffae..cc69d71 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -181,6 +181,7 @@ struct nix {
uint16_t tm_root_lvl;
uint16_t tm_flags;
uint16_t tm_link_cfg_lvl;
+ uint8_t tm_aggr_lvl_rr_prio;
uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT];
uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT];
uint64_t tm_markfmt_en;
@@ -284,7 +285,6 @@ void nix_unregister_irqs(struct nix *nix);
/* Default TL1 priority and Quantum from AF */
#define NIX_TM_TL1_DFLT_RR_QTM ((1 << 24) - 1)
-#define NIX_TM_TL1_DFLT_RR_PRIO 1
struct nix_tm_shaper_data {
uint64_t burst_exponent;
@@ -432,7 +432,8 @@ bool nix_tm_child_res_valid(struct nix_tm_node_list *list,
struct nix_tm_node *parent);
uint16_t nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig,
uint16_t *schq, enum roc_nix_tm_tree tree);
-uint8_t nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg,
+uint8_t nix_tm_tl1_default_prep(struct nix *nix, uint32_t schq,
+ volatile uint64_t *reg,
volatile uint64_t *regval);
uint8_t nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
volatile uint64_t *reg,
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 42d3abd..7fd54ef 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -55,7 +55,7 @@ nix_tm_node_reg_conf(struct nix *nix, struct nix_tm_node *node)
req = mbox_alloc_msg_nix_txschq_cfg(mbox);
req->lvl = NIX_TXSCH_LVL_TL1;
- k = nix_tm_tl1_default_prep(node->parent_hw_id, req->reg,
+ k = nix_tm_tl1_default_prep(nix, node->parent_hw_id, req->reg,
req->regval);
req->num_regs = k;
rc = mbox_process(mbox);
@@ -1288,6 +1288,7 @@ nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree)
} while (pend);
nix->tm_link_cfg_lvl = rsp->link_cfg_lvl;
+ nix->tm_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio;
return 0;
alloc_err:
for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index bcdf990..b9b605f 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -478,7 +478,7 @@ nix_tm_child_res_valid(struct nix_tm_node_list *list,
}
uint8_t
-nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg,
+nix_tm_tl1_default_prep(struct nix *nix, uint32_t schq, volatile uint64_t *reg,
volatile uint64_t *regval)
{
uint8_t k = 0;
@@ -496,7 +496,7 @@ nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg,
k++;
reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- regval[k] = (NIX_TM_TL1_DFLT_RR_PRIO << 1);
+ regval[k] = (nix->tm_aggr_lvl_rr_prio << 1);
k++;
reg[k] = NIX_AF_TL1X_CIR(schq);
@@ -540,7 +540,7 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
* Static Priority is disabled
*/
if (hw_lvl == NIX_TXSCH_LVL_TL1 && nix->tm_flags & NIX_TM_TL1_NO_SP) {
- rr_prio = NIX_TM_TL1_DFLT_RR_PRIO;
+ rr_prio = nix->tm_aggr_lvl_rr_prio;
child = 0;
}
@@ -662,7 +662,7 @@ nix_tm_sched_reg_prep(struct nix *nix, struct nix_tm_node *node,
*/
if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
(!nix_tm_have_tl1_access(nix) || nix->tm_flags & NIX_TM_TL1_NO_SP))
- strict_prio = NIX_TM_TL1_DFLT_RR_PRIO;
+ strict_prio = nix->tm_aggr_lvl_rr_prio;
plt_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
"prio 0x%" PRIx64 ", rr_quantum/rr_wt 0x%" PRIx64 " (%p)",
--
2.8.4
* [PATCH v2 10/28] net/cnxk: support loopback mode on AF VF's
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (7 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 09/28] common/cnxk: use aggregate level rr prio from mbox Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 11/28] net/cnxk: update LBK ethdev link info Nithin Dabilpuram
` (17 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Support internal loopback mode on AF VFs using RoC by setting the
Tx channel to be the same as the Rx channel.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index bd31a9a..e1b1e16 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1119,6 +1119,9 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
nb_txq = RTE_MAX(data->nb_tx_queues, 1);
+ if (roc_nix_is_lbk(nix))
+ nix->enable_loop = eth_dev->data->dev_conf.lpbk_mode;
+
/* Alloc a nix lf */
rc = roc_nix_lf_alloc(nix, nb_rxq, nb_txq, rx_cfg);
if (rc) {
@@ -1242,6 +1245,9 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
}
}
+ if (roc_nix_is_lbk(nix))
+ goto skip_lbk_setup;
+
/* Configure loop back mode */
rc = roc_nix_mac_loopback_enable(nix,
eth_dev->data->dev_conf.lpbk_mode);
@@ -1250,6 +1256,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto cq_fini;
}
+skip_lbk_setup:
/* Setup Inline security support */
rc = nix_security_setup(dev);
if (rc)
--
2.8.4
* [PATCH v2 11/28] net/cnxk: update LBK ethdev link info
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (8 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 10/28] net/cnxk: support loopback mode on AF VF's Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 12/28] net/cnxk: add barrier after meta batch free in scalar Nithin Dabilpuram
` (16 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Update the link info of LBK ethdevs, i.e. AF VFs, as always up
at 100G. This is because there is no PHY for the LBK interfaces,
so we won't get a link update notification for them.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cnxk_link.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index f10a502..b1d59e3 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -12,6 +12,17 @@ cnxk_nix_toggle_flag_link_cfg(struct cnxk_eth_dev *dev, bool set)
else
dev->flags &= ~CNXK_LINK_CFG_IN_PROGRESS_F;
+ /* Update link info for LBK */
+ if (!set && roc_nix_is_lbk(&dev->nix)) {
+ struct rte_eth_link link;
+
+ link.link_status = RTE_ETH_LINK_UP;
+ link.link_speed = RTE_ETH_SPEED_NUM_100G;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ rte_eth_linkstatus_set(dev->eth_dev, &link);
+ }
+
rte_wmb();
}
--
2.8.4
* [PATCH v2 12/28] net/cnxk: add barrier after meta batch free in scalar
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (9 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 11/28] net/cnxk: update LBK ethdev link info Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 13/28] net/cnxk: disable default inner chksum for outb inline Nithin Dabilpuram
` (15 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, stable
Add a barrier after the meta batch free in the scalar routine when
LMT lines are exactly full, to make sure that the next LMT line user
in Tx starts writing the lines only when the previous STEORL
operations are complete.
Fixes: 4382a7ccf781 ("net/cnxk: support Rx security offload on cn10k")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_rx.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index e4f5a55..94c1f1e 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -1007,10 +1007,11 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
plt_write64((wdata | nb_pkts), rxq->cq_door);
/* Free remaining meta buffers if any */
- if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+ if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff)
nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
- plt_io_wmb();
- }
+
+ if (flags & NIX_RX_OFFLOAD_SECURITY_F)
+ rte_io_wmb();
return nb_pkts;
}
--
2.8.4
* [PATCH v2 13/28] net/cnxk: disable default inner chksum for outb inline
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (10 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 12/28] net/cnxk: add barrier after meta batch free in scalar Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 14/28] net/cnxk: fix roundup size with transport mode Nithin Dabilpuram
` (14 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Disable default inner L3/L4 checksum generation for the outbound inline
path and enable it based on SA options or RTE_MBUF flags, as per
the spec. Though the checksum generation does not impact performance
much, it overwrites the zero checksum for UDP packets,
which is not always desirable.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.h | 4 +++-
drivers/net/cnxk/cn10k_ethdev_sec.c | 3 +++
drivers/net/cnxk/cn10k_tx.h | 44 ++++++++++++++++++++++++++++++-------
3 files changed, 42 insertions(+), 9 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 1e49d65..9642d6a 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -71,7 +71,9 @@ struct cn10k_sec_sess_priv {
uint8_t mode : 1;
uint8_t roundup_byte : 5;
uint8_t roundup_len;
- uint16_t partial_len;
+ uint16_t partial_len : 10;
+ uint16_t chksum : 2;
+ uint16_t rsvd : 4;
};
uint64_t u64;
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 87bb691..b307215 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -552,6 +552,9 @@ cn10k_eth_sec_session_create(void *device,
sess_priv.partial_len = rlens->partial_len;
sess_priv.mode = outb_sa_dptr->w2.s.ipsec_mode;
sess_priv.outer_ip_ver = outb_sa_dptr->w2.s.outer_ip_ver;
+ /* Propagate inner checksum enable from SA to fast path */
+ sess_priv.chksum = (!ipsec->options.ip_csum_enable << 1 |
+ !ipsec->options.l4_csum_enable);
/* Pointer from eth_sec -> outb_sa */
eth_sec->sa = outb_sa;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index de88a21..981bc9b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -246,6 +246,7 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
{
struct cn10k_sec_sess_priv sess_priv;
uint32_t pkt_len, dlen_adj, rlen;
+ uint8_t l3l4type, chksum;
uint64x2_t cmd01, cmd23;
uintptr_t dptr, nixtx;
uint64_t ucode_cmd[4];
@@ -256,10 +257,23 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
sess_priv.u64 = *rte_security_dynfield(m);
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
+ if (flags & NIX_TX_NEED_SEND_HDR_W1) {
l2_len = vgetq_lane_u8(*cmd0, 8);
- else
+ /* Extract l3l4type either from il3il4type or ol3ol4type */
+ if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F &&
+ flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)
+ l3l4type = vgetq_lane_u8(*cmd0, 13);
+ else
+ l3l4type = vgetq_lane_u8(*cmd0, 12);
+
+ chksum = (l3l4type & 0x1) << 1 | !!(l3l4type & 0x30);
+ chksum = ~chksum;
+ sess_priv.chksum = sess_priv.chksum & chksum;
+ /* Clear SEND header flags */
+ *cmd0 = vsetq_lane_u16(0, *cmd0, 6);
+ } else {
l2_len = m->l2_len;
+ }
/* Retrieve DPTR */
dptr = vgetq_lane_u64(*cmd1, 1);
@@ -291,8 +305,8 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
sa_base &= ~0xFFFFUL;
sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
- ucode_cmd[0] =
- (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+ ucode_cmd[0] = (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
+ ((uint64_t)sess_priv.chksum) << 32 | pkt_len);
/* CPT Word 0 and Word 1 */
cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
@@ -343,6 +357,7 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
struct cn10k_sec_sess_priv sess_priv;
uint32_t pkt_len, dlen_adj, rlen;
struct nix_send_hdr_s *send_hdr;
+ uint8_t l3l4type, chksum;
uint64x2_t cmd01, cmd23;
union nix_send_sg_s *sg;
uintptr_t dptr, nixtx;
@@ -360,10 +375,23 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
else
sg = (union nix_send_sg_s *)&cmd[2];
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
+ if (flags & NIX_TX_NEED_SEND_HDR_W1) {
l2_len = cmd[1] & 0xFF;
- else
+ /* Extract l3l4type either from il3il4type or ol3ol4type */
+ if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F &&
+ flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)
+ l3l4type = (cmd[1] >> 40) & 0xFF;
+ else
+ l3l4type = (cmd[1] >> 32) & 0xFF;
+
+ chksum = (l3l4type & 0x1) << 1 | !!(l3l4type & 0x30);
+ chksum = ~chksum;
+ sess_priv.chksum = sess_priv.chksum & chksum;
+ /* Clear SEND header flags */
+ cmd[1] &= ~(0xFFFFUL << 32);
+ } else {
l2_len = m->l2_len;
+ }
/* Retrieve DPTR */
dptr = *(uint64_t *)(sg + 1);
@@ -395,8 +423,8 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
sa_base &= ~0xFFFFUL;
sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
- ucode_cmd[0] =
- (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+ ucode_cmd[0] = (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
+ ((uint64_t)sess_priv.chksum) << 32 | pkt_len);
/* CPT Word 0 and Word 1. Assume no multi-seg support */
cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
--
2.8.4
* [PATCH v2 14/28] net/cnxk: fix roundup size with transport mode
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (11 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 13/28] net/cnxk: disable default inner chksum for outb inline Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 15/28] net/cnxk: update inline device in ethdev telemetry Nithin Dabilpuram
` (13 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, stable
For transport mode, the roundup needs to be based on the L4 data
and shouldn't include the L3 length.
By including the L3 length, the rlen that is calculated and put in
the send header would exceed the final length of the packet in some
scenarios where padding is necessary.
Also, when both outer and inner checksum offload flags are enabled,
get l2_len and l3_len from il3ptr and il4ptr.
Fixes: 55bfac717c72 ("net/cnxk: support Tx security offload on cn10k")
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_tx.h | 34 ++++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 981bc9b..c25825c 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -248,23 +248,29 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
uint32_t pkt_len, dlen_adj, rlen;
uint8_t l3l4type, chksum;
uint64x2_t cmd01, cmd23;
+ uint8_t l2_len, l3_len;
uintptr_t dptr, nixtx;
uint64_t ucode_cmd[4];
uint64_t *laddr;
- uint8_t l2_len;
uint16_t tag;
uint64_t sa;
sess_priv.u64 = *rte_security_dynfield(m);
if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- l2_len = vgetq_lane_u8(*cmd0, 8);
/* Extract l3l4type either from il3il4type or ol3ol4type */
if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F &&
- flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)
+ flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
+ l2_len = vgetq_lane_u8(*cmd0, 10);
+ /* L4 ptr from send hdr includes l2 and l3 len */
+ l3_len = vgetq_lane_u8(*cmd0, 11) - l2_len;
l3l4type = vgetq_lane_u8(*cmd0, 13);
- else
+ } else {
+ l2_len = vgetq_lane_u8(*cmd0, 8);
+ /* L4 ptr from send hdr includes l2 and l3 len */
+ l3_len = vgetq_lane_u8(*cmd0, 9) - l2_len;
l3l4type = vgetq_lane_u8(*cmd0, 12);
+ }
chksum = (l3l4type & 0x1) << 1 | !!(l3l4type & 0x30);
chksum = ~chksum;
@@ -273,6 +279,7 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
*cmd0 = vsetq_lane_u16(0, *cmd0, 6);
} else {
l2_len = m->l2_len;
+ l3_len = m->l3_len;
}
/* Retrieve DPTR */
@@ -281,6 +288,8 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
/* Calculate dlen adj */
dlen_adj = pkt_len - l2_len;
+ /* Exclude l3 len from roundup for transport mode */
+ dlen_adj -= sess_priv.mode ? 0 : l3_len;
rlen = (dlen_adj + sess_priv.roundup_len) +
(sess_priv.roundup_byte - 1);
rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
@@ -360,10 +369,10 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
uint8_t l3l4type, chksum;
uint64x2_t cmd01, cmd23;
union nix_send_sg_s *sg;
+ uint8_t l2_len, l3_len;
uintptr_t dptr, nixtx;
uint64_t ucode_cmd[4];
uint64_t *laddr;
- uint8_t l2_len;
uint16_t tag;
uint64_t sa;
@@ -376,13 +385,19 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
sg = (union nix_send_sg_s *)&cmd[2];
if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- l2_len = cmd[1] & 0xFF;
/* Extract l3l4type either from il3il4type or ol3ol4type */
if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F &&
- flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)
+ flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
+ l2_len = (cmd[1] >> 16) & 0xFF;
+ /* L4 ptr from send hdr includes l2 and l3 len */
+ l3_len = ((cmd[1] >> 24) & 0xFF) - l2_len;
l3l4type = (cmd[1] >> 40) & 0xFF;
- else
+ } else {
+ l2_len = cmd[1] & 0xFF;
+ /* L4 ptr from send hdr includes l2 and l3 len */
+ l3_len = ((cmd[1] >> 8) & 0xFF) - l2_len;
l3l4type = (cmd[1] >> 32) & 0xFF;
+ }
chksum = (l3l4type & 0x1) << 1 | !!(l3l4type & 0x30);
chksum = ~chksum;
@@ -391,6 +406,7 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
cmd[1] &= ~(0xFFFFUL << 32);
} else {
l2_len = m->l2_len;
+ l3_len = m->l3_len;
}
/* Retrieve DPTR */
@@ -399,6 +415,8 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
/* Calculate dlen adj */
dlen_adj = pkt_len - l2_len;
+ /* Exclude l3 len from roundup for transport mode */
+ dlen_adj -= sess_priv.mode ? 0 : l3_len;
rlen = (dlen_adj + sess_priv.roundup_len) +
(sess_priv.roundup_byte - 1);
rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
--
2.8.4
* [PATCH v2 15/28] net/cnxk: update inline device in ethdev telemetry
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (12 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 14/28] net/cnxk: fix roundup size with transport mode Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 16/28] net/cnxk: change env for debug IV Nithin Dabilpuram
` (12 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Rakesh Kudurumalla
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
The inline PF function is updated in ethdev_tel_handle_info()
when an inline device is attached to any DPDK process.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_telemetry.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_ethdev_telemetry.c b/drivers/net/cnxk/cnxk_ethdev_telemetry.c
index 83bc658..b76dbdf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_telemetry.c
+++ b/drivers/net/cnxk/cnxk_ethdev_telemetry.c
@@ -23,6 +23,7 @@ ethdev_tel_handle_info(const char *cmd __rte_unused,
struct eth_info_s {
/** PF/VF information */
uint16_t pf_func;
+ uint16_t inl_dev_pf_func;
uint8_t max_mac_entries;
bool dmac_filter_ena;
uint8_t dmac_filter_count;
@@ -62,6 +63,8 @@ ethdev_tel_handle_info(const char *cmd __rte_unused,
info = ð_info.info;
dev = cnxk_eth_pmd_priv(eth_dev);
if (dev) {
+ info->inl_dev_pf_func =
+ roc_nix_inl_dev_pffunc_get();
info->pf_func = roc_nix_get_pf_func(&dev->nix);
info->max_mac_entries = dev->max_mac_entries;
info->dmac_filter_ena = dev->dmac_filter_enable;
--
2.8.4
* [PATCH v2 16/28] net/cnxk: change env for debug IV
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (13 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 15/28] net/cnxk: update inline device in ethdev telemetry Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 17/28] net/cnxk: reset offload flag if reassembly is disabled Nithin Dabilpuram
` (11 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
Changed the environment variable name used to specify a
debug IV for unit testing of inline IPsec offload
with known test vectors.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index b307215..60b7093 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -522,10 +522,11 @@ cn10k_eth_sec_session_create(void *device,
goto mempool_put;
}
- iv_str = getenv("CN10K_ETH_SEC_IV_OVR");
- if (iv_str)
- outb_dbg_iv_update(outb_sa_dptr, iv_str);
-
+ if (conf->ipsec.options.iv_gen_disable == 1) {
+ iv_str = getenv("ETH_SEC_IV_OVR");
+ if (iv_str)
+ outb_dbg_iv_update(outb_sa_dptr, iv_str);
+ }
/* Fill outbound sa misc params */
rc = cn10k_eth_sec_outb_sa_misc_fill(&dev->nix, outb_sa_dptr,
outb_sa, ipsec, sa_idx);
--
2.8.4
* [PATCH v2 17/28] net/cnxk: reset offload flag if reassembly is disabled
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (14 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 16/28] net/cnxk: change env for debug IV Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 18/28] net/cnxk: support decrement TTL for inline IPsec Nithin Dabilpuram
` (10 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
The Rx offload flag needs to be reset if the IP reassembly flag
is not set when calling reassembly_conf_set().
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index b5f3c83..d04b9eb 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -547,6 +547,12 @@ cn10k_nix_reassembly_conf_set(struct rte_eth_dev *eth_dev,
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
int rc = 0;
+ if (!conf->flags) {
+ /* Clear offload flags on disable */
+ dev->rx_offload_flags &= ~NIX_RX_REAS_F;
+ return 0;
+ }
+
rc = roc_nix_reassembly_configure(conf->timeout_ms,
conf->max_frags);
if (!rc && dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
--
2.8.4
* [PATCH v2 18/28] net/cnxk: support decrement TTL for inline IPsec
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (15 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 17/28] net/cnxk: reset offload flag if reassembly is disabled Nithin Dabilpuram
@ 2022-04-22 10:46 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 19/28] net/cnxk: optimize Rx fast path for security pkts Nithin Dabilpuram
` (9 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:46 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
Added support for decrementing the TTL (IPv4) / hop limit (IPv6)
while doing inline IPsec processing if the security session's
SA options have dec_ttl enabled.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.h | 3 ++-
drivers/net/cnxk/cn10k_ethdev_sec.c | 1 +
drivers/net/cnxk/cn10k_tx.h | 6 ++++--
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 9642d6a..c8666ce 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -73,7 +73,8 @@ struct cn10k_sec_sess_priv {
uint8_t roundup_len;
uint16_t partial_len : 10;
uint16_t chksum : 2;
- uint16_t rsvd : 4;
+ uint16_t dec_ttl : 1;
+ uint16_t rsvd : 3;
};
uint64_t u64;
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 60b7093..f32e169 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -556,6 +556,7 @@ cn10k_eth_sec_session_create(void *device,
/* Propagate inner checksum enable from SA to fast path */
sess_priv.chksum = (!ipsec->options.ip_csum_enable << 1 |
!ipsec->options.l4_csum_enable);
+ sess_priv.dec_ttl = ipsec->options.dec_ttl;
/* Pointer from eth_sec -> outb_sa */
eth_sec->sa = outb_sa;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c25825c..c482352 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -315,7 +315,8 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
ucode_cmd[0] = (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
- ((uint64_t)sess_priv.chksum) << 32 | pkt_len);
+ ((uint64_t)sess_priv.chksum) << 32 |
+ ((uint64_t)sess_priv.dec_ttl) << 34 | pkt_len);
/* CPT Word 0 and Word 1 */
cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
@@ -442,7 +443,8 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
ucode_cmd[0] = (ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
- ((uint64_t)sess_priv.chksum) << 32 | pkt_len);
+ ((uint64_t)sess_priv.chksum) << 32 |
+ ((uint64_t)sess_priv.dec_ttl) << 34 | pkt_len);
/* CPT Word 0 and Word 1. Assume no multi-seg support */
cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
--
2.8.4
* [PATCH v2 19/28] net/cnxk: optimize Rx fast path for security pkts
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (16 preceding siblings ...)
2022-04-22 10:46 ` [PATCH v2 18/28] net/cnxk: support decrement TTL for inline IPsec Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 20/28] net/cnxk: update olflags with L3/L4 csum offload Nithin Dabilpuram
` (8 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Optimize the Rx fast path for security packets by preprocessing
most of the operations, such as the SA pointer compute,
inner WQE pointer fetch and ucode completion translation,
before the packet is characterized as an inbound inline packet.
The preprocessed info is discarded if the packet turns out
not to be a security packet. Also fix fetching of CQ word5
for vector mode. Get the ucode completion code from the CPT parse
header and the RLEN from the IPv4/IPv6 decrypted packet, as it is
in the same 64B cacheline as the CPT parse header in most
cases. This avoids accessing an extra cacheline.
Fixes: c062f5726f61 ("net/cnxk: support IP reassembly")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_rx.h | 488 +++++++++++++++++++++++++++-----------------
1 file changed, 306 insertions(+), 182 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 94c1f1e..14b634e 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -341,6 +341,9 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, uint64_t cq_w1,
mbuf->data_len = frag_size;
fragx_sum += frag_size;
+ /* Mark frag as get */
+ RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
+
/* Frag-2: */
if (hdr->w0.num_frags > 2) {
frag_ptr = (uint64_t *)(finfo + 1);
@@ -354,6 +357,9 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, uint64_t cq_w1,
*(uint64_t *)(&mbuf->rearm_data) = mbuf_init | data_off;
mbuf->data_len = frag_size;
fragx_sum += frag_size;
+
+ /* Mark frag as get */
+ RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
}
/* Frag-3: */
@@ -368,6 +374,9 @@ nix_sec_reassemble_frags(const struct cpt_parse_hdr_s *hdr, uint64_t cq_w1,
*(uint64_t *)(&mbuf->rearm_data) = mbuf_init | data_off;
mbuf->data_len = frag_size;
fragx_sum += frag_size;
+
+ /* Mark frag as get */
+ RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
}
if (inner_rx->lctype == NPC_LT_LC_IP) {
@@ -413,10 +422,10 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
struct cn10k_inb_priv_data *inb_priv;
struct rte_mbuf *inner = NULL;
- uint64_t res_w1;
uint32_t sa_idx;
- uint16_t uc_cc;
+ uint16_t ucc;
uint32_t len;
+ uintptr_t ip;
void *inb_sa;
uint64_t w0;
@@ -438,20 +447,23 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
*rte_security_dynfield(inner) =
(uint64_t)inb_priv->userdata;
- /* CPT result(struct cpt_cn10k_res_s) is at
- * after first IOVA in meta
+ /* Get ucc from cpt parse header */
+ ucc = hdr->w3.hw_ccode;
+
+ /* Calculate inner packet length as
+ * IP total len + l2 len
*/
- res_w1 = *((uint64_t *)(&inner[1]) + 10);
- uc_cc = res_w1 & 0xFF;
+ ip = (uintptr_t)hdr + ((cq_w5 >> 16) & 0xFF);
+ ip += ((cq_w1 >> 40) & 0x6);
+ len = rte_be_to_cpu_16(*(uint16_t *)ip);
+ len += ((cq_w5 >> 16) & 0xFF) - (cq_w5 & 0xFF);
+ len += (cq_w1 & BIT(42)) ? 40 : 0;
- /* Calculate inner packet length */
- len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
- sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
inner->pkt_len = len;
inner->data_len = len;
*(uint64_t *)(&inner->rearm_data) = mbuf_init;
- inner->ol_flags = ((uc_cc == CPT_COMP_WARN) ?
+ inner->ol_flags = ((ucc == CPT_COMP_WARN) ?
RTE_MBUF_F_RX_SEC_OFFLOAD :
(RTE_MBUF_F_RX_SEC_OFFLOAD |
RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
@@ -477,6 +489,12 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
*loff = *loff + 1;
+ /* Mark meta mbuf as put */
+ RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0);
+
+ /* Mark inner mbuf as get */
+ RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
+
return inner;
} else if (cq_w1 & BIT(11)) {
inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
@@ -492,22 +510,21 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
/* Update dynamic field with userdata */
*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
- /* Update l2 hdr length first */
+ /* Get ucc from cpt parse header */
+ ucc = hdr->w3.hw_ccode;
- /* CPT result(struct cpt_cn10k_res_s) is at
- * after first IOVA in meta
- */
- res_w1 = *((uint64_t *)(&inner[1]) + 10);
- uc_cc = res_w1 & 0xFF;
+ /* Calculate inner packet length as IP total len + l2 len */
+ ip = (uintptr_t)hdr + ((cq_w5 >> 16) & 0xFF);
+ ip += ((cq_w1 >> 40) & 0x6);
+ len = rte_be_to_cpu_16(*(uint16_t *)ip);
+ len += ((cq_w5 >> 16) & 0xFF) - (cq_w5 & 0xFF);
+ len += (cq_w1 & BIT(42)) ? 40 : 0;
- /* Calculate inner packet length */
- len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
- sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
inner->pkt_len = len;
inner->data_len = len;
*(uint64_t *)(&inner->rearm_data) = mbuf_init;
- inner->ol_flags = ((uc_cc == CPT_COMP_WARN) ?
+ inner->ol_flags = ((ucc == CPT_COMP_WARN) ?
RTE_MBUF_F_RX_SEC_OFFLOAD :
(RTE_MBUF_F_RX_SEC_OFFLOAD |
RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
@@ -532,83 +549,34 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
#if defined(RTE_ARCH_ARM64)
-static __rte_always_inline struct rte_mbuf *
-nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t sa_base,
- uintptr_t laddr, uint8_t *loff, struct rte_mbuf *mbuf,
- uint16_t data_off, uint8x16_t *rx_desc_field1,
- uint64_t *ol_flags, const uint16_t flags,
- uint64x2_t *rearm)
+static __rte_always_inline void
+nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t inb_sa,
+ uintptr_t cpth, struct rte_mbuf *inner,
+ uint8x16_t *rx_desc_field1, uint64_t *ol_flags,
+ const uint16_t flags, uint64x2_t *rearm)
{
- const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
- const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+ const struct cpt_parse_hdr_s *hdr =
+ (const struct cpt_parse_hdr_s *)cpth;
uint64_t mbuf_init = vgetq_lane_u64(*rearm, 0);
struct cn10k_inb_priv_data *inb_priv;
- struct rte_mbuf *inner;
- uint64_t *sg, res_w1;
- uint32_t sa_idx;
- void *inb_sa;
- uint16_t len;
- uint64_t w0;
- if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11))) {
- w0 = hdr->w0.u64;
- sa_idx = w0 >> 32;
+ /* Clear checksum flags */
+ *ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK |
+ RTE_MBUF_F_RX_IP_CKSUM_MASK);
- /* Get SPI from CPT_PARSE_S's cookie(already swapped) */
- w0 = hdr->w0.u64;
- sa_idx = w0 >> 32;
+ /* Get SPI from CPT_PARSE_S's cookie(already swapped) */
+ inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd((void *)inb_sa);
- inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
- inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+ /* Update dynamic field with userdata */
+ *rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
- /* Clear checksum flags */
- *ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK |
- RTE_MBUF_F_RX_IP_CKSUM_MASK);
+ /* Mark inner mbuf as get */
+ RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
- if (!hdr->w0.num_frags) {
- /* No Reassembly or inbound error */
- inner = (struct rte_mbuf *)
- (rte_be_to_cpu_64(hdr->wqe_ptr) -
- sizeof(struct rte_mbuf));
- /* Update dynamic field with userdata */
- *rte_security_dynfield(inner) =
- (uint64_t)inb_priv->userdata;
-
- /* CPT result(struct cpt_cn10k_res_s) is at
- * after first IOVA in meta
- */
- sg = (uint64_t *)(inner + 1);
- res_w1 = sg[10];
-
- /* Clear checksum flags and update security flag */
- *ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK |
- RTE_MBUF_F_RX_IP_CKSUM_MASK);
- *ol_flags |=
- (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
- RTE_MBUF_F_RX_SEC_OFFLOAD :
- (RTE_MBUF_F_RX_SEC_OFFLOAD |
- RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
- /* Calculate inner packet length */
- len = ((res_w1 >> 16) & 0xFFFF) +
- hdr->w2.il3_off -
- sizeof(struct cpt_parse_hdr_s) -
- (w0 & 0x7);
- /* Update pkt_len and data_len */
- *rx_desc_field1 =
- vsetq_lane_u16(len, *rx_desc_field1, 2);
- *rx_desc_field1 =
- vsetq_lane_u16(len, *rx_desc_field1, 4);
-
- } else if (!(hdr->w0.err_sum) && !(hdr->w0.reas_sts)) {
+ if (flags & NIX_RX_REAS_F && hdr->w0.num_frags) {
+ if (!(hdr->w0.err_sum) && !(hdr->w0.reas_sts)) {
/* Reassembly success */
- inner = nix_sec_reassemble_frags(hdr, cq_w1, cq_w5,
- mbuf_init);
- sg = (uint64_t *)(inner + 1);
- res_w1 = sg[10];
-
- /* Update dynamic field with userdata */
- *rte_security_dynfield(inner) =
- (uint64_t)inb_priv->userdata;
+ nix_sec_reassemble_frags(hdr, cq_w1, cq_w5, mbuf_init);
/* Assume success */
*ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
@@ -624,7 +592,7 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t sa_base,
*rearm = vsetq_lane_u64(mbuf_init, *rearm, 0);
} else {
/* Reassembly failure */
- inner = nix_sec_attach_frags(hdr, inb_priv, mbuf_init);
+ nix_sec_attach_frags(hdr, inb_priv, mbuf_init);
*ol_flags |= inner->ol_flags;
/* Update pkt_len and data_len */
@@ -633,65 +601,7 @@ nix_sec_meta_to_mbuf(uint64_t cq_w1, uint64_t cq_w5, uintptr_t sa_base,
*rx_desc_field1 = vsetq_lane_u16(inner->data_len,
*rx_desc_field1, 4);
}
-
- /* Store meta in lmtline to free
- * Assume all meta's from same aura.
- */
- *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
- *loff = *loff + 1;
-
- /* Return inner mbuf */
- return inner;
-
- } else if (cq_w1 & BIT(11)) {
- inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
- sizeof(struct rte_mbuf));
- /* Get SPI from CPT_PARSE_S's cookie(already swapped) */
- w0 = hdr->w0.u64;
- sa_idx = w0 >> 32;
-
- inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
- inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
-
- /* Update dynamic field with userdata */
- *rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
-
- /* CPT result(struct cpt_cn10k_res_s) is at
- * after first IOVA in meta
- */
- sg = (uint64_t *)(inner + 1);
- res_w1 = sg[10];
-
- /* Clear checksum flags and update security flag */
- *ol_flags &= ~(RTE_MBUF_F_RX_L4_CKSUM_MASK | RTE_MBUF_F_RX_IP_CKSUM_MASK);
- *ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
- RTE_MBUF_F_RX_SEC_OFFLOAD :
- (RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
- /* Calculate inner packet length */
- len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
- sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
- /* Update pkt_len and data_len */
- *rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 2);
- *rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 4);
-
- /* Store meta in lmtline to free
- * Assume all meta's from same aura.
- */
- *(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
- *loff = *loff + 1;
-
- /* Mark meta mbuf as put */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0);
-
- /* Mark inner mbuf as get */
- RTE_MEMPOOL_CHECK_COOKIES(inner->pool, (void **)&inner, 1, 1);
-
- /* Return inner mbuf */
- return inner;
}
-
- /* Return same mbuf as it is not a decrypted pkt */
- return mbuf;
}
#endif
@@ -1040,6 +950,14 @@ nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
return ol_flags;
}
+#define NIX_PUSH_META_TO_FREE(_mbuf, _laddr, _loff_p) \
+ do { \
+ *(uint64_t *)((_laddr) + (*(_loff_p) << 3)) = (uint64_t)_mbuf; \
+ *(_loff_p) = *(_loff_p) + 1; \
+ /* Mark meta mbuf as put */ \
+ RTE_MEMPOOL_CHECK_COOKIES(_mbuf->pool, (void **)&_mbuf, 1, 0); \
+ } while (0)
+
static __rte_always_inline uint16_t
cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
const uint16_t flags, void *lookup_mem,
@@ -1083,6 +1001,12 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
if (flags & NIX_RX_OFFLOAD_TSTAMP_F)
tstamp = rxq->tstamp;
+
+ cq0 = desc + CQE_SZ(head);
+ rte_prefetch0(CQE_PTR_OFF(cq0, 0, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 1, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 2, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 3, 64, flags));
} else {
RTE_SET_USED(head);
}
@@ -1188,11 +1112,34 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
}
}
} else {
- if (pkts - packets > 4) {
- rte_prefetch_non_temporal(CQE_PTR_OFF(cq0, 4, 64, flags));
- rte_prefetch_non_temporal(CQE_PTR_OFF(cq0, 5, 64, flags));
- rte_prefetch_non_temporal(CQE_PTR_OFF(cq0, 6, 64, flags));
- rte_prefetch_non_temporal(CQE_PTR_OFF(cq0, 7, 64, flags));
+ if (flags & NIX_RX_OFFLOAD_SECURITY_F &&
+ pkts - packets > 4) {
+ /* Fetch cpt parse header */
+ void *p0 =
+ (void *)*CQE_PTR_OFF(cq0, 4, 72, flags);
+ void *p1 =
+ (void *)*CQE_PTR_OFF(cq0, 5, 72, flags);
+ void *p2 =
+ (void *)*CQE_PTR_OFF(cq0, 6, 72, flags);
+ void *p3 =
+ (void *)*CQE_PTR_OFF(cq0, 7, 72, flags);
+ rte_prefetch0(p0);
+ rte_prefetch0(p1);
+ rte_prefetch0(p2);
+ rte_prefetch0(p3);
+ }
+
+ if (pkts - packets > 8) {
+ if (flags) {
+ rte_prefetch0(CQE_PTR_OFF(cq0, 8, 0, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 9, 0, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 10, 0, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 11, 0, flags));
+ }
+ rte_prefetch0(CQE_PTR_OFF(cq0, 8, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 9, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 10, 64, flags));
+ rte_prefetch0(CQE_PTR_OFF(cq0, 11, 64, flags));
}
}
@@ -1237,13 +1184,6 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
}
- if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
- /* Prefetch probable CPT parse header area */
- rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf0, d_off));
- rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf1, d_off));
- rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf2, d_off));
- rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf3, d_off));
- }
/* Load CQE word0 and word 1 */
const uint64_t cq0_w0 = *CQE_PTR_OFF(cq0, 0, 0, flags);
@@ -1329,10 +1269,126 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
/* Translate meta to mbuf */
if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
- uint64_t cq0_w5 = *(uint64_t *)(cq0 + CQE_SZ(0) + 40);
- uint64_t cq1_w5 = *(uint64_t *)(cq0 + CQE_SZ(1) + 40);
- uint64_t cq2_w5 = *(uint64_t *)(cq0 + CQE_SZ(2) + 40);
- uint64_t cq3_w5 = *(uint64_t *)(cq0 + CQE_SZ(3) + 40);
+ uint64_t cq0_w5 = *CQE_PTR_OFF(cq0, 0, 40, flags);
+ uint64_t cq1_w5 = *CQE_PTR_OFF(cq0, 1, 40, flags);
+ uint64_t cq2_w5 = *CQE_PTR_OFF(cq0, 2, 40, flags);
+ uint64_t cq3_w5 = *CQE_PTR_OFF(cq0, 3, 40, flags);
+ uintptr_t cpth0 = (uintptr_t)mbuf0 + d_off;
+ uintptr_t cpth1 = (uintptr_t)mbuf1 + d_off;
+ uintptr_t cpth2 = (uintptr_t)mbuf2 + d_off;
+ uintptr_t cpth3 = (uintptr_t)mbuf3 + d_off;
+
+ uint64x2_t inner0, inner1, inner2, inner3;
+ uint64x2_t wqe01, wqe23, sa01, sa23;
+ uint16x4_t lens, l2lens, ltypes;
+ uint8x8_t ucc;
+
+ inner0 = vld1q_u64((const uint64_t *)cpth0);
+ inner1 = vld1q_u64((const uint64_t *)cpth1);
+ inner2 = vld1q_u64((const uint64_t *)cpth2);
+ inner3 = vld1q_u64((const uint64_t *)cpth3);
+
+ /* Extract and reverse wqe pointers */
+ wqe01 = vzip2q_u64(inner0, inner1);
+ wqe23 = vzip2q_u64(inner2, inner3);
+ wqe01 = vrev64q_u8(wqe01);
+ wqe23 = vrev64q_u8(wqe23);
+ /* Adjust wqe pointers to point to mbuf */
+ wqe01 = vsubq_u64(wqe01,
+ vdupq_n_u64(sizeof(struct rte_mbuf)));
+ wqe23 = vsubq_u64(wqe23,
+ vdupq_n_u64(sizeof(struct rte_mbuf)));
+
+ /* Extract sa idx from cookie area and add to sa_base */
+ sa01 = vzip1q_u64(inner0, inner1);
+ sa23 = vzip1q_u64(inner2, inner3);
+
+ sa01 = vshrq_n_u64(sa01, 32);
+ sa23 = vshrq_n_u64(sa23, 32);
+ sa01 = vshlq_n_u64(sa01,
+ ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
+ sa23 = vshlq_n_u64(sa23,
+ ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
+ sa01 = vaddq_u64(sa01, vdupq_n_u64(sa_base));
+ sa23 = vaddq_u64(sa23, vdupq_n_u64(sa_base));
+
+ const uint8x16_t tbl = {
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ /* HW_CCODE -> RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED */
+ 1, 0, 1, 1, 1, 1, 0, 1,
+ };
+
+ const int8x8_t err_off = {
+ /* UCC of significance starts from 0xF0 */
+ 0xF0,
+ /* Move HW_CCODE from 0:6 -> 8:14 */
+ -8,
+ 0xF0,
+ -8,
+ 0xF0,
+ -8,
+ 0xF0,
+ -8,
+ };
+
+ ucc = vdup_n_u8(0);
+ ucc = vset_lane_u16(*(uint16_t *)(cpth0 + 30), ucc, 0);
+ ucc = vset_lane_u16(*(uint16_t *)(cpth1 + 30), ucc, 1);
+ ucc = vset_lane_u16(*(uint16_t *)(cpth2 + 30), ucc, 2);
+ ucc = vset_lane_u16(*(uint16_t *)(cpth3 + 30), ucc, 3);
+ ucc = vsub_s8(ucc, err_off);
+ ucc = vqtbl1_u8(tbl, ucc);
+
+ RTE_BUILD_BUG_ON(NPC_LT_LC_IP != 2);
+ RTE_BUILD_BUG_ON(NPC_LT_LC_IP_OPT != 3);
+ RTE_BUILD_BUG_ON(NPC_LT_LC_IP6 != 4);
+ RTE_BUILD_BUG_ON(NPC_LT_LC_IP6_EXT != 5);
+
+ ltypes = vdup_n_u16(0);
+ ltypes = vset_lane_u16((cq0_w1 >> 40) & 0x6, ltypes, 0);
+ ltypes = vset_lane_u16((cq1_w1 >> 40) & 0x6, ltypes, 1);
+ ltypes = vset_lane_u16((cq2_w1 >> 40) & 0x6, ltypes, 2);
+ ltypes = vset_lane_u16((cq3_w1 >> 40) & 0x6, ltypes, 3);
+
+ /* Extract and reverse l3 length from IPv4/IPv6 hdr
+ * that is in same cacheline most probably as cpth.
+ */
+ cpth0 += ((cq0_w5 >> 16) & 0xFF) +
+ vget_lane_u16(ltypes, 0);
+ cpth1 += ((cq1_w5 >> 16) & 0xFF) +
+ vget_lane_u16(ltypes, 1);
+ cpth2 += ((cq2_w5 >> 16) & 0xFF) +
+ vget_lane_u16(ltypes, 2);
+ cpth3 += ((cq3_w5 >> 16) & 0xFF) +
+ vget_lane_u16(ltypes, 3);
+ lens = vdup_n_u16(0);
+ lens = vset_lane_u16(*(uint16_t *)cpth0, lens, 0);
+ lens = vset_lane_u16(*(uint16_t *)cpth1, lens, 1);
+ lens = vset_lane_u16(*(uint16_t *)cpth2, lens, 2);
+ lens = vset_lane_u16(*(uint16_t *)cpth3, lens, 3);
+ lens = vrev16_u8(lens);
+
+ /* Add l2 length to l3 lengths */
+ l2lens = vdup_n_u16(0);
+ l2lens = vset_lane_u16(((cq0_w5 >> 16) & 0xFF) -
+ (cq0_w5 & 0xFF),
+ l2lens, 0);
+ l2lens = vset_lane_u16(((cq1_w5 >> 16) & 0xFF) -
+ (cq1_w5 & 0xFF),
+ l2lens, 1);
+ l2lens = vset_lane_u16(((cq2_w5 >> 16) & 0xFF) -
+ (cq2_w5 & 0xFF),
+ l2lens, 2);
+ l2lens = vset_lane_u16(((cq3_w5 >> 16) & 0xFF) -
+ (cq3_w5 & 0xFF),
+ l2lens, 3);
+ lens = vadd_u16(lens, l2lens);
+
+ /* L3 header adjust */
+ const int8x8_t l3adj = {
+ 0, 0, 0, 0, 40, 0, 0, 0,
+ };
+ lens = vadd_u16(lens, vtbl1_u8(l3adj, ltypes));
/* Initialize rearm data when reassembly is enabled as
* data offset might change.
@@ -1345,25 +1401,93 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
}
/* Checksum ol_flags will be cleared if mbuf is meta */
- mbuf0 = nix_sec_meta_to_mbuf(cq0_w1, cq0_w5, sa_base, laddr,
- &loff, mbuf0, d_off, &f0,
- &ol_flags0, flags, &rearm0);
- mbuf01 = vsetq_lane_u64((uint64_t)mbuf0, mbuf01, 0);
-
- mbuf1 = nix_sec_meta_to_mbuf(cq1_w1, cq1_w5, sa_base, laddr,
- &loff, mbuf1, d_off, &f1,
- &ol_flags1, flags, &rearm1);
- mbuf01 = vsetq_lane_u64((uint64_t)mbuf1, mbuf01, 1);
-
- mbuf2 = nix_sec_meta_to_mbuf(cq2_w1, cq2_w5, sa_base, laddr,
- &loff, mbuf2, d_off, &f2,
- &ol_flags2, flags, &rearm2);
- mbuf23 = vsetq_lane_u64((uint64_t)mbuf2, mbuf23, 0);
-
- mbuf3 = nix_sec_meta_to_mbuf(cq3_w1, cq3_w5, sa_base, laddr,
- &loff, mbuf3, d_off, &f3,
- &ol_flags3, flags, &rearm3);
- mbuf23 = vsetq_lane_u64((uint64_t)mbuf3, mbuf23, 1);
+ if (cq0_w1 & BIT(11)) {
+ uintptr_t wqe = vgetq_lane_u64(wqe01, 0);
+ uintptr_t sa = vgetq_lane_u64(sa01, 0);
+ uint16_t len = vget_lane_u16(lens, 0);
+
+ cpth0 = (uintptr_t)mbuf0 + d_off;
+ /* Free meta to aura */
+ NIX_PUSH_META_TO_FREE(mbuf0, laddr, &loff);
+ mbuf01 = vsetq_lane_u64(wqe, mbuf01, 0);
+ mbuf0 = (struct rte_mbuf *)wqe;
+
+ /* Update pkt_len and data_len */
+ f0 = vsetq_lane_u16(len, f0, 2);
+ f0 = vsetq_lane_u16(len, f0, 4);
+
+ nix_sec_meta_to_mbuf(cq0_w1, cq0_w5, sa, cpth0,
+ mbuf0, &f0, &ol_flags0,
+ flags, &rearm0);
+ ol_flags0 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
+ (uint64_t)vget_lane_u8(ucc, 1) << 19);
+ }
+
+ if (cq1_w1 & BIT(11)) {
+ uintptr_t wqe = vgetq_lane_u64(wqe01, 1);
+ uintptr_t sa = vgetq_lane_u64(sa01, 1);
+ uint16_t len = vget_lane_u16(lens, 1);
+
+ cpth1 = (uintptr_t)mbuf1 + d_off;
+ /* Free meta to aura */
+ NIX_PUSH_META_TO_FREE(mbuf1, laddr, &loff);
+ mbuf01 = vsetq_lane_u64(wqe, mbuf01, 1);
+ mbuf1 = (struct rte_mbuf *)wqe;
+
+ /* Update pkt_len and data_len */
+ f1 = vsetq_lane_u16(len, f1, 2);
+ f1 = vsetq_lane_u16(len, f1, 4);
+
+ nix_sec_meta_to_mbuf(cq1_w1, cq1_w5, sa, cpth1,
+ mbuf1, &f1, &ol_flags1,
+ flags, &rearm1);
+ ol_flags1 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
+ (uint64_t)vget_lane_u8(ucc, 3) << 19);
+ }
+
+ if (cq2_w1 & BIT(11)) {
+ uintptr_t wqe = vgetq_lane_u64(wqe23, 0);
+ uintptr_t sa = vgetq_lane_u64(sa23, 0);
+ uint16_t len = vget_lane_u16(lens, 2);
+
+ cpth2 = (uintptr_t)mbuf2 + d_off;
+ /* Free meta to aura */
+ NIX_PUSH_META_TO_FREE(mbuf2, laddr, &loff);
+ mbuf23 = vsetq_lane_u64(wqe, mbuf23, 0);
+ mbuf2 = (struct rte_mbuf *)wqe;
+
+ /* Update pkt_len and data_len */
+ f2 = vsetq_lane_u16(len, f2, 2);
+ f2 = vsetq_lane_u16(len, f2, 4);
+
+ nix_sec_meta_to_mbuf(cq2_w1, cq2_w5, sa, cpth2,
+ mbuf2, &f2, &ol_flags2,
+ flags, &rearm2);
+ ol_flags2 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
+ (uint64_t)vget_lane_u8(ucc, 5) << 19);
+ }
+
+ if (cq3_w1 & BIT(11)) {
+ uintptr_t wqe = vgetq_lane_u64(wqe23, 1);
+ uintptr_t sa = vgetq_lane_u64(sa23, 1);
+ uint16_t len = vget_lane_u16(lens, 3);
+
+ cpth3 = (uintptr_t)mbuf3 + d_off;
+ /* Free meta to aura */
+ NIX_PUSH_META_TO_FREE(mbuf3, laddr, &loff);
+ mbuf23 = vsetq_lane_u64(wqe, mbuf23, 1);
+ mbuf3 = (struct rte_mbuf *)wqe;
+
+ /* Update pkt_len and data_len */
+ f3 = vsetq_lane_u16(len, f3, 2);
+ f3 = vsetq_lane_u16(len, f3, 4);
+
+ nix_sec_meta_to_mbuf(cq3_w1, cq3_w5, sa, cpth3,
+ mbuf3, &f3, &ol_flags3,
+ flags, &rearm3);
+ ol_flags3 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
+ (uint64_t)vget_lane_u8(ucc, 7) << 19);
+ }
}
if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
--
2.8.4
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v2 20/28] net/cnxk: update olflags with L3/L4 csum offload
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (17 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 19/28] net/cnxk: optimize Rx fast path for security pkts Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 21/28] net/cnxk: add capabilities for IPsec crypto algos Nithin Dabilpuram
` (7 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
When a packet is processed with inline IPsec offload,
ol_flags were previously updated only with RTE_MBUF_F_RX_SEC_OFFLOAD,
even though the hardware also reports the L3/L4 checksum status.
Hence, ol_flags are now also updated with RTE_MBUF_F_RX_IP_CKSUM_GOOD,
RTE_MBUF_F_RX_L4_CKSUM_GOOD, etc. based on the microcode completion
codes.
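The microcode completion code (UCC) to ol_flags translation used in this patch packs one 8-bit flag pattern per code into a 64-bit constant and selects a byte by shifting. A minimal stand-alone sketch of that technique follows; the flag values here are illustrative stand-ins, not the real rte_mbuf bit definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in flag values for illustration only (not the real rte_mbuf bits). */
#define IP_CKSUM_GOOD (1ull << 7)
#define IP_CKSUM_BAD  (1ull << 4)
#define L4_CKSUM_GOOD (1ull << 8)
#define L4_CKSUM_BAD  (1ull << 3)

/* One 8-bit entry per UCC value 0xF0..0xF7; each entry is the desired
 * flag pattern pre-shifted right by 1 so it fits in a byte, mirroring
 * NIX_RX_SEC_UCC_CONST. Byte 0 (SA soft-expiry codes) stays 0. */
#define UCC_CONST \
	(((IP_CKSUM_BAD >> 1) << 8) | \
	 (((IP_CKSUM_GOOD | L4_CKSUM_GOOD) >> 1) << 24) | \
	 (((IP_CKSUM_GOOD | L4_CKSUM_BAD) >> 1) << 32) | \
	 (((IP_CKSUM_GOOD | L4_CKSUM_GOOD) >> 1) << 40) | \
	 (((IP_CKSUM_GOOD | L4_CKSUM_GOOD) >> 1) << 48) | \
	 ((IP_CKSUM_GOOD >> 1) << 56))

static uint64_t ucc_to_olflags(uint8_t ucc)
{
	/* Only codes 0xF0..0xFF carry checksum status; others add nothing. */
	if ((ucc & 0xF0) != 0xF0)
		return 0;
	/* Select byte (ucc & 0xF) of the packed constant, then undo the >>1. */
	return ((UCC_CONST >> ((ucc & 0xF) << 3)) & 0xFF) << 1;
}
```

This keeps the per-packet cost at a shift and a mask instead of a branch per completion code.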
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_rx.h | 51 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 50 insertions(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 14b634e..00bec01 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -42,6 +42,18 @@
(uint64_t *)(((uintptr_t)((uint64_t *)(b))[i]) - (o)) : \
(uint64_t *)(((uintptr_t)(b)) + CQE_SZ(i) - (o)))
+#define NIX_RX_SEC_UCC_CONST \
+ ((RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1) << 8 | \
+ ((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1) \
+ << 24 | \
+ ((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1) \
+ << 32 | \
+ ((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1) \
+ << 40 | \
+ ((RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1) \
+ << 48 | \
+ (RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1) << 56)
+
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
static inline void
nix_mbuf_validate_next(struct rte_mbuf *m)
@@ -467,6 +479,11 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
RTE_MBUF_F_RX_SEC_OFFLOAD :
(RTE_MBUF_F_RX_SEC_OFFLOAD |
RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
+
+ ucc = hdr->w3.uc_ccode;
+ inner->ol_flags |= ((ucc & 0xF0) == 0xF0) ?
+ ((NIX_RX_SEC_UCC_CONST >> ((ucc & 0xF) << 3))
+ & 0xFF) << 1 : 0;
} else if (!(hdr->w0.err_sum) && !(hdr->w0.reas_sts)) {
/* Reassembly success */
inner = nix_sec_reassemble_frags(hdr, cq_w1, cq_w5,
@@ -529,6 +546,11 @@ nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, uint64_t cq_w5, const uint64_t sa_base,
(RTE_MBUF_F_RX_SEC_OFFLOAD |
RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED));
+ ucc = hdr->w3.uc_ccode;
+ inner->ol_flags |= ((ucc & 0xF0) == 0xF0) ?
+ ((NIX_RX_SEC_UCC_CONST >> ((ucc & 0xF) << 3))
+ & 0xFF) << 1 : 0;
+
/* Store meta in lmtline to free
* Assume all meta's from same aura.
*/
@@ -1313,7 +1335,26 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
sa23 = vaddq_u64(sa23, vdupq_n_u64(sa_base));
const uint8x16_t tbl = {
- 0, 0, 0, 0, 0, 0, 0, 0,
+ /* ROC_IE_OT_UCC_SUCCESS_SA_SOFTEXP_FIRST */
+ 0,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_IP_BADCSUM */
+ RTE_MBUF_F_RX_IP_CKSUM_BAD >> 1,
+ /* ROC_IE_OT_UCC_SUCCESS_SA_SOFTEXP_AGAIN */
+ 0,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_L4_GOODCSUM */
+ (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_L4_BADCSUM */
+ (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_BAD) >> 1,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_UDPESP_NZCSUM */
+ (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_UDP_ZEROCSUM */
+ (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD) >> 1,
+ /* ROC_IE_OT_UCC_SUCCESS_PKT_IP_GOODCSUM */
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD >> 1,
/* HW_CCODE -> RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED */
1, 0, 1, 1, 1, 1, 0, 1,
};
@@ -1419,6 +1460,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
nix_sec_meta_to_mbuf(cq0_w1, cq0_w5, sa, cpth0,
mbuf0, &f0, &ol_flags0,
flags, &rearm0);
+ ol_flags0 |= ((uint64_t)vget_lane_u8(ucc, 0))
+ << 1;
ol_flags0 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
(uint64_t)vget_lane_u8(ucc, 1) << 19);
}
@@ -1441,6 +1484,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
nix_sec_meta_to_mbuf(cq1_w1, cq1_w5, sa, cpth1,
mbuf1, &f1, &ol_flags1,
flags, &rearm1);
+ ol_flags1 |= ((uint64_t)vget_lane_u8(ucc, 2))
+ << 1;
ol_flags1 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
(uint64_t)vget_lane_u8(ucc, 3) << 19);
}
@@ -1463,6 +1508,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
nix_sec_meta_to_mbuf(cq2_w1, cq2_w5, sa, cpth2,
mbuf2, &f2, &ol_flags2,
flags, &rearm2);
+ ol_flags2 |= ((uint64_t)vget_lane_u8(ucc, 4))
+ << 1;
ol_flags2 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
(uint64_t)vget_lane_u8(ucc, 5) << 19);
}
@@ -1485,6 +1532,8 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
nix_sec_meta_to_mbuf(cq3_w1, cq3_w5, sa, cpth3,
mbuf3, &f3, &ol_flags3,
flags, &rearm3);
+ ol_flags3 |= ((uint64_t)vget_lane_u8(ucc, 6))
+ << 1;
ol_flags3 |= (RTE_MBUF_F_RX_SEC_OFFLOAD |
(uint64_t)vget_lane_u8(ucc, 7) << 19);
}
--
2.8.4
* [PATCH v2 21/28] net/cnxk: add capabilities for IPsec crypto algos
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (18 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 20/28] net/cnxk: update olflags with L3/L4 csum offload Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 22/28] net/cnxk: add capabilities for IPsec options Nithin Dabilpuram
` (6 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal
From: Akhil Goyal <gakhil@marvell.com>
Added supported crypto algorithms for inline IPsec
offload.
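The key, digest, and IV sizes advertised in the capability entries below follow the usual rte_crypto parameter-range convention (min, max, increment). A small sketch of how a requested size is validated against such a range; the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Size range as used in entries like cn10k_eth_sec_crypto_caps:
 * a size is supported when it lies in [min, max] and is reachable
 * from min in steps of 'increment' (increment 0 means only min). */
struct size_range {
	uint16_t min;
	uint16_t max;
	uint16_t increment;
};

static int size_supported(const struct size_range *r, uint16_t sz)
{
	if (sz < r->min || sz > r->max)
		return 0;
	if (r->increment == 0)
		return sz == r->min;
	return (uint16_t)(sz - r->min) % r->increment == 0;
}
```

For example, the AES-CTR entry's key range {16, 32, 8} admits 16/24/32-byte keys, while AES-XCBC's digest range {12, 12, 0} admits only 12 bytes.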
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 166 ++++++++++++++++++++++++++++++++++++
1 file changed, 166 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index f32e169..6a3e636 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -62,6 +62,46 @@ static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
}, }
}, }
},
+ { /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ }
+ }, }
+ }, }
+ },
+ { /* AES-XCBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ { .sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0,
+ },
+ }, }
+ }, }
+ },
{ /* SHA1 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -82,6 +122,132 @@ static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
}, }
}, }
},
+ { /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 1024,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 16
+ },
+ }, }
+ }, }
+ },
+ { /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 1024,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 24,
+ .max = 48,
+ .increment = 24
+ },
+ }, }
+ }, }
+ },
+ { /* SHA512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 1024,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ }, }
+ }, }
+ },
+ { /* AES GMAC (AUTH) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 8,
+ .max = 16,
+ .increment = 4
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 12,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ { /* NULL (AUTH) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_NULL,
+ .block_size = 1,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ }, },
+ }, },
+ },
+ { /* NULL (CIPHER) */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_NULL,
+ .block_size = 1,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, },
+ }, }
+ },
+
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
--
2.8.4
* [PATCH v2 22/28] net/cnxk: add capabilities for IPsec options
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (19 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 21/28] net/cnxk: add capabilities for IPsec crypto algos Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 23/28] net/cnxk: support security stats Nithin Dabilpuram
` (5 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal, Vamsi Attunuru
From: Akhil Goyal <gakhil@marvell.com>
Added supported capabilities for various IPsec SA options.
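How an application consumes such a capability table can be sketched with simplified stand-in types: it scans for an entry matching protocol/mode/direction and then checks the relevant option bit before requesting it in the session configuration. In DPDK proper this goes through rte_security_capability_get(); everything below is an illustrative model, not the real rte_security structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, trimmed-down model of a driver capability table. */
enum mode { MODE_TUNNEL, MODE_TRANSPORT };
enum dir { DIR_INGRESS, DIR_EGRESS };

struct cap {
	enum mode mode;
	enum dir dir;
	unsigned int udp_encap : 1;
	unsigned int esn : 1;
};

/* Mirrors the four ESP entries added by this patch. */
static const struct cap caps[] = {
	{ MODE_TUNNEL, DIR_INGRESS, .udp_encap = 1, .esn = 1 },
	{ MODE_TUNNEL, DIR_EGRESS, .udp_encap = 1, .esn = 1 },
	{ MODE_TRANSPORT, DIR_EGRESS, .udp_encap = 1, .esn = 1 },
	{ MODE_TRANSPORT, DIR_INGRESS, .udp_encap = 1, .esn = 1 },
};

/* Return the capability entry matching mode and direction, or NULL. */
static const struct cap *find_cap(enum mode m, enum dir d)
{
	for (size_t i = 0; i < sizeof(caps) / sizeof(caps[0]); i++)
		if (caps[i].mode == m && caps[i].dir == d)
			return &caps[i];
	return NULL;
}
```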
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 57 ++++++++++++++++++++++++++++++++++---
1 file changed, 53 insertions(+), 4 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 6a3e636..7e4941d 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -259,7 +259,20 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
+ .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .options = {
+ .udp_encap = 1,
+ .udp_ports_verify = 1,
+ .copy_df = 1,
+ .copy_dscp = 1,
+ .copy_flabel = 1,
+ .tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR,
+ .dec_ttl = 1,
+ .ip_csum_enable = 1,
+ .l4_csum_enable = 1,
+ .stats = 0,
+ .esn = 1,
+ },
},
.crypto_capabilities = cn10k_eth_sec_crypto_caps,
.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
@@ -271,7 +284,20 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
+ .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .options = {
+ .iv_gen_disable = 1,
+ .udp_encap = 1,
+ .udp_ports_verify = 1,
+ .copy_df = 1,
+ .copy_dscp = 1,
+ .copy_flabel = 1,
+ .dec_ttl = 1,
+ .ip_csum_enable = 1,
+ .l4_csum_enable = 1,
+ .stats = 0,
+ .esn = 1,
+ },
},
.crypto_capabilities = cn10k_eth_sec_crypto_caps,
.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
@@ -283,7 +309,19 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
+ .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .options = {
+ .iv_gen_disable = 1,
+ .udp_encap = 1,
+ .udp_ports_verify = 1,
+ .copy_df = 1,
+ .copy_dscp = 1,
+ .dec_ttl = 1,
+ .ip_csum_enable = 1,
+ .l4_csum_enable = 1,
+ .stats = 0,
+ .esn = 1,
+ },
},
.crypto_capabilities = cn10k_eth_sec_crypto_caps,
.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
@@ -295,7 +333,18 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
+ .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .options = {
+ .udp_encap = 1,
+ .udp_ports_verify = 1,
+ .copy_df = 1,
+ .copy_dscp = 1,
+ .dec_ttl = 1,
+ .ip_csum_enable = 1,
+ .l4_csum_enable = 1,
+ .stats = 0,
+ .esn = 1,
+ },
},
.crypto_capabilities = cn10k_eth_sec_crypto_caps,
.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
--
2.8.4
* [PATCH v2 23/28] net/cnxk: support security stats
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (20 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 22/28] net/cnxk: add capabilities for IPsec options Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 24/28] net/cnxk: add support for flow control for outbound inline Nithin Dabilpuram
` (4 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, Akhil Goyal, Vamsi Attunuru
From: Akhil Goyal <gakhil@marvell.com>
Enabled the rte_security stats operation based on the SA options
configured while creating the session.
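The inbound/outbound counter selection performed by the new cn10k_eth_sec_session_stats_get() can be sketched with simplified stand-in structures (the real code flushes the SA context via roc_nix_inl_sa_sync() first, then reads the MIB counters from the SA context):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for the SA context counters and
 * rte_security_stats; not the real driver structures. */
struct mock_sa_ctx {
	uint64_t mib_pkts;
	uint64_t mib_octs;
};

struct mock_stats {
	uint64_t ipackets, ibytes;
	uint64_t opackets, obytes;
};

/* Inbound SAs fill the i* fields, outbound SAs the o* fields, mirroring
 * the eth_sec->inb branch in the patch. */
static void fill_stats(const struct mock_sa_ctx *ctx, int inb,
		       struct mock_stats *st)
{
	memset(st, 0, sizeof(*st));
	if (inb) {
		st->ipackets = ctx->mib_pkts;
		st->ibytes = ctx->mib_octs;
	} else {
		st->opackets = ctx->mib_pkts;
		st->obytes = ctx->mib_octs;
	}
}
```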
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 56 ++++++++++++++++++++++++++++++++++---
1 file changed, 52 insertions(+), 4 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 7e4941d..7c4988b 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -270,7 +270,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.dec_ttl = 1,
.ip_csum_enable = 1,
.l4_csum_enable = 1,
- .stats = 0,
+ .stats = 1,
.esn = 1,
},
},
@@ -295,7 +295,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.dec_ttl = 1,
.ip_csum_enable = 1,
.l4_csum_enable = 1,
- .stats = 0,
+ .stats = 1,
.esn = 1,
},
},
@@ -319,7 +319,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.dec_ttl = 1,
.ip_csum_enable = 1,
.l4_csum_enable = 1,
- .stats = 0,
+ .stats = 1,
.esn = 1,
},
},
@@ -342,7 +342,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.dec_ttl = 1,
.ip_csum_enable = 1,
.l4_csum_enable = 1,
- .stats = 0,
+ .stats = 1,
.esn = 1,
},
},
@@ -679,6 +679,11 @@ cn10k_eth_sec_session_create(void *device,
inb_sa_dptr->w1.s.cookie =
rte_cpu_to_be_32(ipsec->spi & spi_mask);
+ if (ipsec->options.stats == 1) {
+ /* Enable mib counters */
+ inb_sa_dptr->w0.s.count_mib_bytes = 1;
+ inb_sa_dptr->w0.s.count_mib_pkts = 1;
+ }
/* Prepare session priv */
sess_priv.inb_sa = 1;
sess_priv.sa_idx = ipsec->spi & spi_mask;
@@ -761,6 +766,12 @@ cn10k_eth_sec_session_create(void *device,
/* Save rlen info */
cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+ if (ipsec->options.stats == 1) {
+ /* Enable mib counters */
+ outb_sa_dptr->w0.s.count_mib_bytes = 1;
+ outb_sa_dptr->w0.s.count_mib_pkts = 1;
+ }
+
/* Prepare session priv */
sess_priv.sa_idx = outb_priv->sa_idx;
sess_priv.roundup_byte = rlens->roundup_byte;
@@ -877,6 +888,42 @@ cn10k_eth_sec_capabilities_get(void *device __rte_unused)
return cn10k_eth_sec_capabilities;
}
+static int
+cn10k_eth_sec_session_stats_get(void *device, struct rte_security_session *sess,
+ struct rte_security_stats *stats)
+{
+ struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+ struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+ struct cnxk_eth_sec_sess *eth_sec;
+ int rc;
+
+ eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+ if (eth_sec == NULL)
+ return -EINVAL;
+
+ rc = roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+ ROC_NIX_INL_SA_OP_FLUSH);
+ if (rc)
+ return -EINVAL;
+ rte_delay_ms(1);
+
+ stats->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+
+ if (eth_sec->inb) {
+ stats->ipsec.ipackets =
+ ((struct roc_ot_ipsec_inb_sa *)eth_sec->sa)->ctx.mib_pkts;
+ stats->ipsec.ibytes =
+ ((struct roc_ot_ipsec_inb_sa *)eth_sec->sa)->ctx.mib_octs;
+ } else {
+ stats->ipsec.opackets =
+ ((struct roc_ot_ipsec_outb_sa *)eth_sec->sa)->ctx.mib_pkts;
+ stats->ipsec.obytes =
+ ((struct roc_ot_ipsec_outb_sa *)eth_sec->sa)->ctx.mib_octs;
+ }
+
+ return 0;
+}
+
void
cn10k_eth_sec_ops_override(void)
{
@@ -890,4 +937,5 @@ cn10k_eth_sec_ops_override(void)
cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
+ cnxk_eth_sec_ops.session_stats_get = cn10k_eth_sec_session_stats_get;
}
--
2.8.4
* [PATCH v2 24/28] net/cnxk: add support for flow control for outbound inline
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (21 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 23/28] net/cnxk: support security stats Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 25/28] net/cnxk: perform early MTU setup for eventmode Nithin Dabilpuram
` (3 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Add support for flow control in the outbound inline path using
flow-control count updates from CPT.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 3 +++
drivers/net/cnxk/cn10k_ethdev.h | 1 +
drivers/net/cnxk/cn10k_tx.h | 37 ++++++++++++++++++++++++++++++++++++-
drivers/net/cnxk/cnxk_ethdev.c | 13 +++++++++++++
drivers/net/cnxk/cnxk_ethdev.h | 3 +++
5 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index d04b9eb..de688f0 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -204,6 +204,9 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
txq->cpt_io_addr = inl_lf->io_addr;
txq->cpt_fc = inl_lf->fc_addr;
+ txq->cpt_fc_sw = (int32_t *)((uintptr_t)dev->outb.fc_sw_mem +
+ crypto_qid * RTE_CACHE_LINE_SIZE);
+
txq->cpt_desc = inl_lf->nb_desc * 0.7;
txq->sa_base = (uint64_t)dev->outb.sa_base;
txq->sa_base |= eth_dev->data->port_id;
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index c8666ce..acfdbb6 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -19,6 +19,7 @@ struct cn10k_eth_txq {
uint64_t sa_base;
uint64_t *cpt_fc;
uint16_t cpt_desc;
+ int32_t *cpt_fc_sw;
uint64_t lso_tun_fmt;
uint64_t ts_mem;
uint64_t mark_flag : 8;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c482352..762586f 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -209,6 +209,37 @@ cn10k_nix_tx_skeleton(struct cn10k_eth_txq *txq, uint64_t *cmd,
}
static __rte_always_inline void
+cn10k_nix_sec_fc_wait(struct cn10k_eth_txq *txq, uint16_t nb_pkts)
+{
+ int32_t nb_desc, val, newval;
+ int32_t *fc_sw;
+ volatile uint64_t *fc;
+
+ /* Check if there is any CPT instruction to submit */
+ if (!nb_pkts)
+ return;
+
+again:
+ fc_sw = txq->cpt_fc_sw;
+ val = __atomic_sub_fetch(fc_sw, nb_pkts, __ATOMIC_RELAXED);
+ if (likely(val >= 0))
+ return;
+
+ nb_desc = txq->cpt_desc;
+ fc = txq->cpt_fc;
+ while (true) {
+ newval = nb_desc - __atomic_load_n(fc, __ATOMIC_RELAXED);
+ newval -= nb_pkts;
+ if (newval >= 0)
+ break;
+ }
+
+ if (!__atomic_compare_exchange_n(fc_sw, &val, newval, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED))
+ goto again;
+}
+
+static __rte_always_inline void
cn10k_nix_sec_steorl(uintptr_t io_addr, uint32_t lmt_id, uint8_t lnum,
uint8_t loff, uint8_t shft)
{
@@ -995,6 +1026,7 @@ cn10k_nix_xmit_pkts(void *tx_queue, uint64_t *ws, struct rte_mbuf **tx_pkts,
if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
/* Reduce pkts to be sent to CPT */
burst -= ((c_lnum << 1) + c_loff);
+ cn10k_nix_sec_fc_wait(txq, (c_lnum << 1) + c_loff);
cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
c_shft);
}
@@ -1138,6 +1170,7 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws,
if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
/* Reduce pkts to be sent to CPT */
burst -= ((c_lnum << 1) + c_loff);
+ cn10k_nix_sec_fc_wait(txq, (c_lnum << 1) + c_loff);
cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
c_shft);
}
@@ -2682,9 +2715,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
left -= burst;
/* Submit CPT instructions if any */
- if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+ if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+ cn10k_nix_sec_fc_wait(txq, (c_lnum << 1) + c_loff);
cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
c_shft);
+ }
/* Trigger LMTST */
if (lnum > 16) {
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index e1b1e16..12ff30f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -155,9 +155,19 @@ nix_security_setup(struct cnxk_eth_dev *dev)
dev->outb.sa_base = roc_nix_inl_outb_sa_base_get(nix);
dev->outb.sa_bmap_mem = mem;
dev->outb.sa_bmap = bmap;
+
+ dev->outb.fc_sw_mem = plt_zmalloc(dev->outb.nb_crypto_qs *
+ RTE_CACHE_LINE_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (!dev->outb.fc_sw_mem) {
+ plt_err("Outbound fc sw mem alloc failed");
+ goto sa_bmap_free;
+ }
}
return 0;
+sa_bmap_free:
+ plt_free(dev->outb.sa_bmap_mem);
sa_dptr_free:
if (dev->inb.sa_dptr)
plt_free(dev->inb.sa_dptr);
@@ -253,6 +263,9 @@ nix_security_release(struct cnxk_eth_dev *dev)
plt_free(dev->outb.sa_dptr);
dev->outb.sa_dptr = NULL;
}
+
+ plt_free(dev->outb.fc_sw_mem);
+ dev->outb.fc_sw_mem = NULL;
}
dev->inb.inl_dev = false;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 7c7e013..28fc937 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -321,6 +321,9 @@ struct cnxk_eth_dev_sec_outb {
/* Crypto queues => CPT lf count */
uint16_t nb_crypto_qs;
+ /* FC sw mem */
+ uint64_t *fc_sw_mem;
+
/* Active sessions */
uint16_t nb_sess;
--
2.8.4
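[Editorial sketch] The cn10k_nix_sec_fc_wait() logic added above pairs a software credit counter (cpt_fc_sw, one cache line per crypto queue) with the hardware in-flight count (cpt_fc): the fast path only decrements the software counter, and the hardware count is polled only when credits run out. A minimal C11 sketch of that scheme follows; the fc_ctx struct and field names are hypothetical stand-ins for the txq members, illustrating the algorithm rather than reproducing the driver code:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

/* Hypothetical stand-ins for txq->cpt_fc_sw (software credits),
 * txq->cpt_fc (hardware in-flight count) and txq->cpt_desc. */
struct fc_ctx {
    _Atomic int32_t fc_sw;    /* software credits, may go negative */
    _Atomic uint64_t fc_hw;   /* descriptors currently in flight */
    int32_t nb_desc;          /* total CPT descriptors */
};

/* Claim credits for nb_pkts CPT instructions. On shortage, refresh the
 * software counter from the hardware count and publish it with a CAS,
 * retrying from the top if another thread raced us — the same shape as
 * cn10k_nix_sec_fc_wait(). */
static void fc_wait(struct fc_ctx *c, int32_t nb_pkts)
{
    int32_t val, newval;

    if (!nb_pkts)
        return;
again:
    /* GCC's __atomic_sub_fetch() returns the new value; C11's
     * fetch_sub returns the old one, so subtract once more. */
    val = atomic_fetch_sub_explicit(&c->fc_sw, nb_pkts,
                                    memory_order_relaxed) - nb_pkts;
    if (val >= 0)
        return;

    /* Credits exhausted: poll hardware until enough descriptors
     * free up, then compute the fresh credit value. */
    do {
        newval = c->nb_desc -
                 (int32_t)atomic_load_explicit(&c->fc_hw,
                                               memory_order_relaxed);
        newval -= nb_pkts;
    } while (newval < 0);

    /* Publish the refreshed count; on a lost race, start over so no
     * concurrent decrement is dropped. */
    if (!atomic_compare_exchange_strong_explicit(&c->fc_sw, &val, newval,
                                                 memory_order_relaxed,
                                                 memory_order_relaxed))
        goto again;
}
```

The relaxed ordering mirrors the patch: the counters gate submission volume only, and the LMTST/steorl barrier that follows provides the ordering for the instructions themselves.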
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v2 25/28] net/cnxk: perform early MTU setup for eventmode
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (22 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 24/28] net/cnxk: add support for flow control for outbound inline Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 26/28] common/cnxk: allow lesser inline inbound sa sizes Nithin Dabilpuram
` (2 subsequent siblings)
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Perform early MTU setup for the event mode path in order
to update the Rx/Tx offload flags before the Rx adapter
setup starts.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 11 +++++++++++
drivers/net/cnxk/cn9k_ethdev.c | 11 +++++++++++
2 files changed, 22 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index de688f0..bc9e10f 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -248,6 +248,17 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
if (rc)
return rc;
+ /* Do initial mtu setup for RQ0 before device start */
+ if (!qid) {
+ rc = nix_recalc_mtu(eth_dev);
+ if (rc)
+ return rc;
+
+ /* Update offload flags */
+ dev->rx_offload_flags = nix_rx_offload_flags(eth_dev);
+ dev->tx_offload_flags = nix_tx_offload_flags(eth_dev);
+ }
+
rq = &dev->rqs[qid];
cq = &dev->cqs[qid];
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 18cc27e..de33fa7 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -241,6 +241,17 @@ cn9k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
if (rc)
return rc;
+ /* Do initial mtu setup for RQ0 before device start */
+ if (!qid) {
+ rc = nix_recalc_mtu(eth_dev);
+ if (rc)
+ return rc;
+
+ /* Update offload flags */
+ dev->rx_offload_flags = nix_rx_offload_flags(eth_dev);
+ dev->tx_offload_flags = nix_tx_offload_flags(eth_dev);
+ }
+
rq = &dev->rqs[qid];
cq = &dev->cqs[qid];
--
2.8.4
* [PATCH v2 26/28] common/cnxk: allow lesser inline inbound sa sizes
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (23 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 25/28] net/cnxk: perform early MTU setup for eventmode Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 27/28] net/cnxk: setup variable inline inbound SA Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path Nithin Dabilpuram
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Restructure the SA setup to allow smaller inbound SA sizes, as
opposed to the full inbound SA size of 1024B with the maximum
possible anti-replay window. Since the inbound SA size is now
variable, move the memset logic out of the common code.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_ie_ot.c | 4 ----
drivers/common/cnxk/roc_nix_inl.c | 9 ++++++++-
drivers/common/cnxk/roc_nix_inl.h | 26 +++++++++++++++++++++++---
3 files changed, 31 insertions(+), 8 deletions(-)
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad3..4b5823d 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -10,8 +10,6 @@ roc_ot_ipsec_inb_sa_init(struct roc_ot_ipsec_inb_sa *sa, bool is_inline)
{
size_t offset;
- memset(sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
-
if (is_inline) {
sa->w0.s.pkt_output = ROC_IE_OT_SA_PKT_OUTPUT_NO_FRAG;
sa->w0.s.pkt_format = ROC_IE_OT_SA_PKT_FMT_META;
@@ -33,8 +31,6 @@ roc_ot_ipsec_outb_sa_init(struct roc_ot_ipsec_outb_sa *sa)
{
size_t offset;
- memset(sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
-
offset = offsetof(struct roc_ot_ipsec_outb_sa, ctx);
sa->w0.s.ctx_push_size = (offset / ROC_CTX_UNIT_8B) + 1;
sa->w0.s.ctx_size = ROC_IE_OT_CTX_ILEN;
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 2c013cb..887d4ad 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -14,9 +14,16 @@ PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ ==
1UL << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2);
PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ ==
1UL << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
-PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ == 1024);
PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ ==
1UL << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ >=
+ ROC_NIX_INL_OT_IPSEC_INB_HW_SZ +
+ ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);
+/* Allow lesser INB SA HW sizes */
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_HW_SZ <=
+ PLT_ALIGN(sizeof(struct roc_ot_ipsec_inb_sa), ROC_ALIGN));
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ ==
+ PLT_ALIGN(sizeof(struct roc_ot_ipsec_outb_sa), ROC_ALIGN));
static int
nix_inl_inb_sa_tbl_setup(struct roc_nix *roc_nix)
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 633f090..e7bcffc 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -23,13 +23,33 @@
#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2 8
/* OT INB HW area */
+#ifndef ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX
+#define ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX 4096u
+#endif
+#define ROC_NIX_INL_OT_IPSEC_AR_WINBITS_SZ \
+ (PLT_ALIGN_CEIL(ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX, \
+ BITS_PER_LONG_LONG) / \
+ BITS_PER_LONG_LONG)
+#define __ROC_NIX_INL_OT_IPSEC_INB_HW_SZ \
+ (offsetof(struct roc_ot_ipsec_inb_sa, ctx.ar_winbits) + \
+ sizeof(uint64_t) * ROC_NIX_INL_OT_IPSEC_AR_WINBITS_SZ)
#define ROC_NIX_INL_OT_IPSEC_INB_HW_SZ \
- PLT_ALIGN(sizeof(struct roc_ot_ipsec_inb_sa), ROC_ALIGN)
+ PLT_ALIGN(__ROC_NIX_INL_OT_IPSEC_INB_HW_SZ, ROC_ALIGN)
/* OT INB SW reserved area */
+#ifndef ROC_NIX_INL_INB_POST_PROCESS
+#define ROC_NIX_INL_INB_POST_PROCESS 1
+#endif
+#if ROC_NIX_INL_INB_POST_PROCESS == 0
+#define ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD 0
+#else
#define ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD 128
+#endif
+
#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ \
- (ROC_NIX_INL_OT_IPSEC_INB_HW_SZ + ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD)
-#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2 10
+ (1UL << (64 - __builtin_clzll(ROC_NIX_INL_OT_IPSEC_INB_HW_SZ + \
+ ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD - 1)))
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2 \
+ __builtin_ctzll(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ)
/* OT OUTB HW area */
#define ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ \
--
2.8.4
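[Editorial sketch] The new macros above derive the inbound SA size at build time: the HW area is cut down to end at the anti-replay winbits for the configured window, the SW reserved area can be compiled out, and the total is rounded up to the next power of two so that the LOG2 variant can simply use __builtin_ctzll(). A small sketch of that arithmetic, with illustrative helper names that are not part of the driver:

```c
#include <stdint.h>
#include <assert.h>

/* Round x up to the next power of two — the same expression the new
 * ROC_NIX_INL_OT_IPSEC_INB_SA_SZ macro uses:
 *     1UL << (64 - __builtin_clzll(x - 1))
 * Valid for x >= 2 (an SA size is never 0 or 1); a power of two maps
 * to itself, so the LOG2 can then be taken with __builtin_ctzll(). */
static inline uint64_t next_pow2(uint64_t x)
{
    return 1ULL << (64 - __builtin_clzll(x - 1));
}

/* 64-bit winbits words needed for a given anti-replay window,
 * mirroring ROC_NIX_INL_OT_IPSEC_AR_WINBITS_SZ: window bits rounded
 * up to whole 64-bit words. */
static inline uint64_t ar_winbits_words(uint64_t win_sz)
{
    return (win_sz + 63) / 64;
}
```

For example, with the default 4096-bit window the winbits area is 64 words (512B), and an HW-plus-SW size of 640B rounds up to a 1024B SA, matching the previous fixed size; a smaller configured window shrinks the SA accordingly.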
* [PATCH v2 27/28] net/cnxk: setup variable inline inbound SA
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (24 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 26/28] common/cnxk: allow lesser inline inbound sa sizes Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path Nithin Dabilpuram
26 siblings, 0 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev
Set up the inline inbound SA assuming a variable size defined
at compile time.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 7c4988b..65519ee 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -259,7 +259,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .replay_win_sz_max = ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX,
.options = {
.udp_encap = 1,
.udp_ports_verify = 1,
@@ -284,7 +284,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .replay_win_sz_max = ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX,
.options = {
.iv_gen_disable = 1,
.udp_encap = 1,
@@ -309,7 +309,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .replay_win_sz_max = ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX,
.options = {
.iv_gen_disable = 1,
.udp_encap = 1,
@@ -333,7 +333,7 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .replay_win_sz_max = ROC_AR_WIN_SIZE_MAX,
+ .replay_win_sz_max = ROC_NIX_INL_OT_IPSEC_AR_WIN_SZ_MAX,
.options = {
.udp_encap = 1,
.udp_ports_verify = 1,
@@ -658,7 +658,7 @@ cn10k_eth_sec_session_create(void *device,
}
inb_sa_dptr = (struct roc_ot_ipsec_inb_sa *)dev->inb.sa_dptr;
- memset(inb_sa_dptr, 0, sizeof(struct roc_ot_ipsec_inb_sa));
+ memset(inb_sa_dptr, 0, ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
/* Fill inbound sa params */
rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa_dptr, ipsec, crypto,
@@ -701,7 +701,7 @@ cn10k_eth_sec_session_create(void *device,
/* Sync session in context cache */
rc = roc_nix_inl_ctx_write(&dev->nix, inb_sa_dptr, eth_sec->sa,
eth_sec->inb,
- sizeof(struct roc_ot_ipsec_inb_sa));
+ ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
if (rc)
goto mempool_put;
@@ -731,7 +731,7 @@ cn10k_eth_sec_session_create(void *device,
rlens = &outb_priv->rlens;
outb_sa_dptr = (struct roc_ot_ipsec_outb_sa *)dev->outb.sa_dptr;
- memset(outb_sa_dptr, 0, sizeof(struct roc_ot_ipsec_outb_sa));
+ memset(outb_sa_dptr, 0, ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
/* Fill outbound sa params */
rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa_dptr, ipsec, crypto);
@@ -795,7 +795,7 @@ cn10k_eth_sec_session_create(void *device,
/* Sync session in context cache */
rc = roc_nix_inl_ctx_write(&dev->nix, outb_sa_dptr, eth_sec->sa,
eth_sec->inb,
- sizeof(struct roc_ot_ipsec_outb_sa));
+ ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
if (rc)
goto mempool_put;
}
@@ -846,21 +846,23 @@ cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
if (eth_sec->inb) {
/* Disable SA */
sa_dptr = dev->inb.sa_dptr;
+ memset(sa_dptr, 0, ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
roc_ot_ipsec_inb_sa_init(sa_dptr, true);
roc_nix_inl_ctx_write(&dev->nix, sa_dptr, eth_sec->sa,
eth_sec->inb,
- sizeof(struct roc_ot_ipsec_inb_sa));
+ ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
dev->inb.nb_sess--;
} else {
/* Disable SA */
sa_dptr = dev->outb.sa_dptr;
+ memset(sa_dptr, 0, ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
roc_ot_ipsec_outb_sa_init(sa_dptr);
roc_nix_inl_ctx_write(&dev->nix, sa_dptr, eth_sec->sa,
eth_sec->inb,
- sizeof(struct roc_ot_ipsec_outb_sa));
+ ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
/* Release Outbound SA index */
cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
--
2.8.4
* [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
` (25 preceding siblings ...)
2022-04-22 10:47 ` [PATCH v2 27/28] net/cnxk: setup variable inline inbound SA Nithin Dabilpuram
@ 2022-04-22 10:47 ` Nithin Dabilpuram
2022-04-22 10:54 ` Pavan Nikhilesh Bhagavatula
2022-05-03 17:36 ` Jerin Jacob
26 siblings, 2 replies; 31+ messages in thread
From: Nithin Dabilpuram @ 2022-04-22 10:47 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev, pbhagavatula, stable
Fix multi-seg extraction in the vwqe path so that the mbuf[]
array is not updated until after it has been consumed via the
cq0 path.
Fixes: 7fbbc981d54f ("event/cnxk: support vectorized Rx event fast path")
Cc: pbhagavatula@marvell.com
Cc: stable@dpdk.org
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_rx.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 00bec01..5ecb20f 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -1673,10 +1673,6 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
- /* Store the mbufs to rx_pkts */
- vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
- vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
-
if (flags & NIX_RX_MULTI_SEG_F) {
/* Multi segment is enable build mseg list for
* individual mbufs in scalar mode.
@@ -1695,6 +1691,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
mbuf3, mbuf_initializer, flags);
}
+ /* Store the mbufs to rx_pkts */
+ vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
+ vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
+
/* Mark mempool obj as "get" as it is alloc'ed by NIX */
RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
--
2.8.4
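[Editorial sketch] The fix moves the vst1q_u64() stores of the decoded mbuf pointers below the multi-seg preparation because, in the vwqe path, mbufs[] aliases the CQE memory (cq0) that the multi-seg step still reads. A much-simplified, hypothetical C model of that store-after-read discipline — DECODE/SEGS_OF are toy stand-ins, not driver macros:

```c
#include <stdint.h>
#include <assert.h>

/* Toy stand-ins for decoding a descriptor into an mbuf pointer and
 * for extracting the segment count from the raw CQE word. */
#define DECODE(x)   ((x) | 0x100)
#define SEGS_OF(x)  ((x) & 0xff)

/* Process one group of four descriptors in place: arr[] initially
 * holds raw CQE words (the "cq0" view) and must end up holding decoded
 * values (the "mbufs" view). All raw reads — including the multi-seg
 * extraction — happen before the decoded values are stored back, which
 * is the reordering the patch applies to the vst1q_u64() stores. */
static uint64_t process_quad(uint64_t *arr)
{
    uint64_t m0 = DECODE(arr[0]), m1 = DECODE(arr[1]);
    uint64_t m2 = DECODE(arr[2]), m3 = DECODE(arr[3]);

    /* Multi-seg extraction still needs the RAW words... */
    uint64_t segs = SEGS_OF(arr[0]) + SEGS_OF(arr[1]) +
                    SEGS_OF(arr[2]) + SEGS_OF(arr[3]);

    /* ...so only now overwrite them with the decoded values. */
    arr[0] = m0; arr[1] = m1; arr[2] = m2; arr[3] = m3;
    return segs;
}
```

Storing early — as the pre-fix code did — would make the scalar multi-seg loop read already-decoded values instead of the raw words, which is exactly the corruption the patch avoids.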
* RE: [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path
2022-04-22 10:47 ` [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path Nithin Dabilpuram
@ 2022-04-22 10:54 ` Pavan Nikhilesh Bhagavatula
2022-05-03 17:36 ` Jerin Jacob
1 sibling, 0 replies; 31+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2022-04-22 10:54 UTC (permalink / raw)
To: Nithin Kumar Dabilpuram, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda,
Sunil Kumar Kori, Satha Koteswara Rao Kottidi
Cc: dev, stable
> -----Original Message-----
> From: Nithin Dabilpuram <ndabilpuram@marvell.com>
> Sent: Friday, April 22, 2022 4:17 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>; Kiran Kumar Kokkilagadda
> <kirankumark@marvell.com>; Sunil Kumar Kori <skori@marvell.com>; Satha
> Koteswara Rao Kottidi <skoteshwar@marvell.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>; stable@dpdk.org
> Subject: [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path
>
> Fix multi-seg extraction in vwqe path to avoid updating mbuf[]
> array until it is used via cq0 path.
>
> Fixes: 7fbbc981d54f ("event/cnxk: support vectorized Rx event fast path")
> Cc: pbhagavatula@marvell.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> drivers/net/cnxk/cn10k_rx.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
> index 00bec01..5ecb20f 100644
> --- a/drivers/net/cnxk/cn10k_rx.h
> +++ b/drivers/net/cnxk/cn10k_rx.h
> @@ -1673,10 +1673,6 @@ cn10k_nix_recv_pkts_vector(void *args, struct
> rte_mbuf **mbufs, uint16_t pkts,
> vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
> vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
>
> - /* Store the mbufs to rx_pkts */
> - vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
> - vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
> -
> if (flags & NIX_RX_MULTI_SEG_F) {
> /* Multi segment is enable build mseg list for
> * individual mbufs in scalar mode.
> @@ -1695,6 +1691,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct
> rte_mbuf **mbufs, uint16_t pkts,
> mbuf3, mbuf_initializer, flags);
> }
>
> + /* Store the mbufs to rx_pkts */
> + vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
> + vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
> +
> /* Mark mempool obj as "get" as it is alloc'ed by NIX */
> RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void
> **)&mbuf0, 1, 1);
> RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void
> **)&mbuf1, 1, 1);
> --
> 2.8.4
* Re: [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT
2022-04-22 10:46 ` [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT Nithin Dabilpuram
@ 2022-04-26 10:12 ` Ray Kinsella
0 siblings, 0 replies; 31+ messages in thread
From: Ray Kinsella @ 2022-04-26 10:12 UTC (permalink / raw)
To: Nithin Dabilpuram
Cc: jerinj, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, dev,
Vidya Sagar Velumuri
Nithin Dabilpuram <ndabilpuram@marvell.com> writes:
> From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
>
> Add new API to configure the SA table entries with new CPT PKIND
> when timestamp is enabled.
>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> drivers/common/cnxk/roc_nix_inl.c | 59 ++++++++++++++++++++++++++++++++++
> drivers/common/cnxk/roc_nix_inl.h | 2 ++
> drivers/common/cnxk/roc_nix_inl_priv.h | 1 +
> drivers/common/cnxk/version.map | 1 +
> 4 files changed, 63 insertions(+)
>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
* Re: [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path
2022-04-22 10:47 ` [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path Nithin Dabilpuram
2022-04-22 10:54 ` Pavan Nikhilesh Bhagavatula
@ 2022-05-03 17:36 ` Jerin Jacob
1 sibling, 0 replies; 31+ messages in thread
From: Jerin Jacob @ 2022-05-03 17:36 UTC (permalink / raw)
To: Nithin Dabilpuram
Cc: Jerin Jacob, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
dpdk-dev, Pavan Nikhilesh, dpdk stable
On Fri, Apr 22, 2022 at 4:20 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> Fix multi-seg extraction in vwqe path to avoid updating mbuf[]
> array until it is used via cq0 path.
>
> Fixes: 7fbbc981d54f ("event/cnxk: support vectorized Rx event fast path")
> Cc: pbhagavatula@marvell.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Please fix the following check-gitlog.sh errors:
Wrong headline prefix:
net/cnxk: add receive channel backpressure for SDP
Is it candidate for Cc: stable@dpdk.org backport?
common/cnxk: fix SQ flush sequence
common/cnxk: fix issues in soft expiry disable path
net/cnxk: optimize Rx fast path for security pkts
> ---
> drivers/net/cnxk/cn10k_rx.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
> index 00bec01..5ecb20f 100644
> --- a/drivers/net/cnxk/cn10k_rx.h
> +++ b/drivers/net/cnxk/cn10k_rx.h
> @@ -1673,10 +1673,6 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
> vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
> vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
>
> - /* Store the mbufs to rx_pkts */
> - vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
> - vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
> -
> if (flags & NIX_RX_MULTI_SEG_F) {
> /* Multi segment is enable build mseg list for
> * individual mbufs in scalar mode.
> @@ -1695,6 +1691,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
> mbuf3, mbuf_initializer, flags);
> }
>
> + /* Store the mbufs to rx_pkts */
> + vst1q_u64((uint64_t *)&mbufs[packets], mbuf01);
> + vst1q_u64((uint64_t *)&mbufs[packets + 2], mbuf23);
> +
> /* Mark mempool obj as "get" as it is alloc'ed by NIX */
> RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
> RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
> --
> 2.8.4
>
end of thread, other threads:[~2022-05-03 17:37 UTC | newest]
Thread overview: 31+ messages
2022-04-22 10:46 [PATCH v2 01/28] common/cnxk: add multi channel support for SDP send queues Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 02/28] net/cnxk: add receive channel backpressure for SDP Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 03/28] common/cnxk: add new pkind for CPT when ts is enabled Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 04/28] common/cnxk: support to configure the ts pkind in CPT Nithin Dabilpuram
2022-04-26 10:12 ` Ray Kinsella
2022-04-22 10:46 ` [PATCH v2 05/28] common/cnxk: fix SQ flush sequence Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 06/28] common/cnxk: skip probing SoC environment for CN9k Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 07/28] common/cnxk: fix issues in soft expiry disable path Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 08/28] common/cnxk: convert warning to debug print Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 09/28] common/cnxk: use aggregate level rr prio from mbox Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 10/28] net/cnxk: support loopback mode on AF VF's Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 11/28] net/cnxk: update LBK ethdev link info Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 12/28] net/cnxk: add barrier after meta batch free in scalar Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 13/28] net/cnxk: disable default inner chksum for outb inline Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 14/28] net/cnxk: fix roundup size with transport mode Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 15/28] net/cnxk: update inline device in ethdev telemetry Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 16/28] net/cnxk: change env for debug IV Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 17/28] net/cnxk: reset offload flag if reassembly is disabled Nithin Dabilpuram
2022-04-22 10:46 ` [PATCH v2 18/28] net/cnxk: support decrement TTL for inline IPsec Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 19/28] net/cnxk: optimize Rx fast path for security pkts Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 20/28] net/cnxk: update olflags with L3/L4 csum offload Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 21/28] net/cnxk: add capabilities for IPsec crypto algos Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 22/28] net/cnxk: add capabilities for IPsec options Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 23/28] net/cnxk: support security stats Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 24/28] net/cnxk: add support for flow control for outbound inline Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 25/28] net/cnxk: perform early MTU setup for eventmode Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 26/28] common/cnxk: allow lesser inline inbound sa sizes Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 27/28] net/cnxk: setup variable inline inbound SA Nithin Dabilpuram
2022-04-22 10:47 ` [PATCH v2 28/28] net/cnxk: fix multi-seg extraction in vwqe path Nithin Dabilpuram
2022-04-22 10:54 ` Pavan Nikhilesh Bhagavatula
2022-05-03 17:36 ` Jerin Jacob