* [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion
@ 2023-03-03 8:09 Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 02/15] net/cnxk: fix data len for first seg with multi seg pkt Nithin Dabilpuram
` (13 more replies)
0 siblings, 14 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:09 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Rakesh Kudurumalla
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
When the application sends external buffers and the tx_compl_ena
devarg is not provided, the CQ for transmit completion is not
initialized, yet the driver still accesses it, resulting in a
segfault.
This patch fixes the issue by freeing such segments up front, which
invokes the buffer's free callback before the actual packet is
transmitted, instead of touching the uninitialized CQ.
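For context, a minimal sketch of the application-side path that leads
here, using the standard DPDK mbuf API (the mempool, buffer, and IOVA
arguments are hypothetical placeholders):

  #include <rte_mbuf.h>

  static void
  ext_free_cb(void *addr, void *opaque)
  {
          /* Application reclaims its own buffer here. */
          RTE_SET_USED(addr);
          RTE_SET_USED(opaque);
  }

  static struct rte_mbuf *
  make_extbuf_pkt(struct rte_mempool *mp, void *ext_buf, rte_iova_t ext_iova)
  {
          struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
          uint16_t buf_len = 2048;
          struct rte_mbuf_ext_shared_info *shinfo;

          if (m == NULL)
                  return NULL;
          /* shinfo is carved from the tail of ext_buf; buf_len is updated */
          shinfo = rte_pktmbuf_ext_shinfo_init_helper(ext_buf, &buf_len,
                                                      ext_free_cb, NULL);
          rte_pktmbuf_attach_extbuf(m, ext_buf, ext_iova, buf_len, shinfo);
          /* RTE_MBUF_HAS_EXTBUF(m) is now true, steering Tx into the path
           * guarded below; data/pkt length setup is omitted for brevity.
           */
          return m;
  }

On transmit without the tx_compl_ena devarg, the guard added below
frees such segments via rte_pktmbuf_free_seg() instead of touching the
uninitialized completion CQ.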
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
Depends-on: series-27133 ("common/cnxk: add cnf10ka A1 platform")
drivers/net/cnxk/cn10k_tx.h | 4 ++++
drivers/net/cnxk/cn9k_tx.h | 4 ++++
2 files changed, 8 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 1c1ce9642a..d0f7bc22a4 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -650,6 +650,10 @@ cn10k_nix_prefree_seg(struct rte_mbuf *m, struct cn10k_eth_txq *txq,
uint32_t sqe_id;
if (RTE_MBUF_HAS_EXTBUF(m)) {
+ if (unlikely(txq->tx_compl.ena == 0)) {
+ rte_pktmbuf_free_seg(m);
+ return 1;
+ }
if (send_hdr->w0.pnc) {
txq->tx_compl.ptr[send_hdr->w1.sqe_id]->next = m;
} else {
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index b4ef45d65c..52661a624c 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -88,6 +88,10 @@ cn9k_nix_prefree_seg(struct rte_mbuf *m, struct cn9k_eth_txq *txq,
uint32_t sqe_id;
if (RTE_MBUF_HAS_EXTBUF(m)) {
+ if (unlikely(txq->tx_compl.ena == 0)) {
+ rte_pktmbuf_free_seg(m);
+ return 1;
+ }
if (send_hdr->w0.pnc) {
txq->tx_compl.ptr[send_hdr->w1.sqe_id]->next = m;
} else {
--
2.25.1
* [PATCH 02/15] net/cnxk: fix data len for first seg with multi seg pkt
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 03/15] net/cnxk: release LBK bpid after freeing resources Nithin Dabilpuram
` (12 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: jerinj, dev
When coming from the vector routine, the first segment length is
set equal to the packet length, on the assumption that the packet
has a single segment. Readjust it in cn10k_nix_prepare_mseg(),
which is called when the mbuf-fast-free offload is disabled.
On CN9K, clear the other data length fields to avoid using stale data.
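A hedged sketch of the CN10K readjustment, assuming m, sg, and len are
in scope as in cn10k_nix_prepare_mseg():

  /* The vector path may have stored pkt_len in the low 16 bits of the
   * first SG word; rewrite them with the true first-segment length.
   */
  uint64_t sg_word = sg->u;
  uint16_t dlen = m->data_len;

  sg_word &= ~0xFFFFUL; /* drop the stale first-seg length */
  sg_word |= dlen;      /* store the actual data_len of segment 0 */
  len -= dlen;          /* bytes left across the remaining segments */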
Fixes: 8520bce63379 ("net/cnxk: rework no-fast-free offload")
Fixes: ae2c2cb60635 ("net/cnxk: avoid command copy from Tx queue")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cn10k_tx.h | 24 +++++++++++++++++++-----
drivers/net/cnxk/cn9k_tx.h | 1 +
2 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index d0f7bc22a4..a72a803e10 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1012,9 +1012,13 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
ol_flags = m->ol_flags;
/* Start from second segment, first segment is already there */
+ dlen = m->data_len;
is_sg2 = 0;
l_sg.u = sg->u;
- len -= l_sg.u & 0xFFFF;
+ /* Clear l_sg.u first seg length that might be stale from vector path */
+ l_sg.u &= ~0xFFFFUL;
+ l_sg.u |= dlen;
+ len -= dlen;
nb_segs = m->nb_segs - 1;
m_next = m->next;
slist = &cmd[3 + off + 1];
@@ -1940,7 +1944,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
uint64x2_t xtmp128, ytmp128;
uint64x2_t xmask01, xmask23;
uintptr_t c_laddr = laddr;
- uint8_t lnum, shift, loff;
+ uint8_t lnum, shift, loff = 0;
rte_iova_t c_io_addr;
uint64_t sa_base;
union wdata {
@@ -2059,10 +2063,20 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
!!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
}
- /* Check if there are enough LMTLINES for this loop */
- if (lnum + 4 > 32) {
+ /* Check if there are enough LMTLINES for this loop.
+ * Consider previous line to be partial.
+ */
+ if (lnum + 4 >= 32) {
uint8_t ldwords_con = 0, lneeded = 0;
- for (j = 0; j < NIX_DESCS_PER_LOOP; j++) {
+
+ if ((loff >> 4) + segdw[0] > 8) {
+ lneeded += 1;
+ ldwords_con = segdw[0];
+ } else {
+ ldwords_con = (loff >> 4) + segdw[0];
+ }
+
+ for (j = 1; j < NIX_DESCS_PER_LOOP; j++) {
ldwords_con += segdw[j];
if (ldwords_con > 8) {
lneeded += 1;
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index 52661a624c..e956c1ad2a 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -461,6 +461,7 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
/* Start from second segment, first segment is already there */
i = 1;
sg_u = sg->u;
+ sg_u &= 0xFC0000000000FFFF;
nb_segs = m->nb_segs - 1;
m_next = m->next;
slist = &cmd[3 + off + 1];
--
2.25.1
* [PATCH 03/15] net/cnxk: release LBK bpid after freeing resources
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 02/15] net/cnxk: fix data len for first seg with multi seg pkt Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 04/15] common/cnxk: add separate inline dev stats API Nithin Dabilpuram
` (11 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Rakesh Kudurumalla
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
BPIDs are not disabled while freeing resources for the NIX device;
as a result, a new BPID is assigned on each run, which leads to
exhaustion of BPIDs after repeated soft exits of the application.
This patch fixes the issue by disabling the RX channel backpressure
config during teardown.
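The resulting teardown order, sketched with the helper this patch adds
(names as in the diff):

  /* Release the RX channel backpressure config, and with it the BPID,
   * before freeing the NIX LF, so that restarts reuse BPIDs instead
   * of leaking one per soft exit.
   */
  roc_nix_tm_fini(nix);
  rc = nix_rxchan_cfg_disable(dev);
  if (rc)
          plt_err("Failed to free nix bpid, rc=%d", rc);
  rc = roc_nix_lf_free(nix);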
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 22072d29b0..e99335b117 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -900,6 +900,27 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
return flowkey_cfg;
}
+static int
+nix_rxchan_cfg_disable(struct cnxk_eth_dev *dev)
+{
+ struct roc_nix *nix = &dev->nix;
+ struct roc_nix_fc_cfg fc_cfg;
+ int rc;
+
+ if (!roc_nix_is_lbk(nix))
+ return 0;
+
+ memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+ fc_cfg.type = ROC_NIX_FC_RXCHAN_CFG;
+ fc_cfg.rxchan_cfg.enable = false;
+ rc = roc_nix_fc_config_set(nix, &fc_cfg);
+ if (rc) {
+ plt_err("Failed to setup flow control, rc=%d(%s)", rc, roc_error_msg_get(rc));
+ return rc;
+ }
+ return 0;
+}
+
static void
nix_free_queue_mem(struct cnxk_eth_dev *dev)
{
@@ -1218,6 +1239,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
goto fail_configure;
roc_nix_tm_fini(nix);
+ nix_rxchan_cfg_disable(dev);
roc_nix_lf_free(nix);
}
@@ -1456,6 +1478,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
roc_nix_tm_fini(nix);
free_nix_lf:
nix_free_queue_mem(dev);
+ rc |= nix_rxchan_cfg_disable(dev);
rc |= roc_nix_lf_free(nix);
fail_configure:
dev->configured = 0;
@@ -2026,6 +2049,11 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
/* Free ROC RQ's, SQ's and CQ's memory */
nix_free_queue_mem(dev);
+ /* free nix bpid */
+ rc = nix_rxchan_cfg_disable(dev);
+ if (rc)
+ plt_err("Failed to free nix bpid, rc=%d", rc);
+
/* Free nix lf resources */
rc = roc_nix_lf_free(nix);
if (rc)
--
2.25.1
* [PATCH 04/15] common/cnxk: add separate inline dev stats API
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 02/15] net/cnxk: fix data len for first seg with multi seg pkt Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 03/15] net/cnxk: release LBK bpid after freeing resources Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 05/15] common/cnxk: distribute SQs to SDP channels Nithin Dabilpuram
` (10 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Kommula Shiva Shankar
From: Kommula Shiva Shankar <kshankar@marvell.com>
This patch adds a separate inline device stats API,
thus avoiding an expensive NIX xstats call.
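A possible caller, sketched (the stats fields are those filled in by
the diff; the printing is illustrative only):

  struct roc_nix_stats stats;

  memset(&stats, 0, sizeof(stats));
  if (roc_nix_inl_dev_stats_get(&stats) == 0)
          plt_info("inl dev rx: ucast=%" PRIu64 " drop=%" PRIu64,
                   stats.rx_ucast, stats.rx_drop);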
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.h | 1 +
drivers/common/cnxk/roc_nix_inl_dev.c | 33 +++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 1 +
3 files changed, 35 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 220663568e..3bb37ce225 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -152,6 +152,7 @@ bool __roc_api roc_nix_inl_dev_is_probed(void);
void __roc_api roc_nix_inl_dev_lock(void);
void __roc_api roc_nix_inl_dev_unlock(void);
int __roc_api roc_nix_inl_dev_xaq_realloc(uint64_t aura_handle);
+int __roc_api roc_nix_inl_dev_stats_get(struct roc_nix_stats *stats);
uint16_t __roc_api roc_nix_inl_dev_pffunc_get(void);
/* NIX Inline Inbound API */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 6f60961bc7..196a04db09 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -15,6 +15,8 @@
ROC_NIX_LF_RX_CFG_IP6_UDP_OPT | ROC_NIX_LF_RX_CFG_DIS_APAD | \
ROC_NIX_LF_RX_CFG_LEN_IL3 | ROC_NIX_LF_RX_CFG_LEN_OL3)
+#define INL_NIX_RX_STATS(val) plt_read64(inl_dev->nix_base + NIX_LF_RX_STATX(val))
+
extern uint32_t soft_exp_consumer_cnt;
static bool soft_exp_poll_thread_exit = true;
@@ -832,6 +834,37 @@ nix_inl_outb_poll_thread_setup(struct nix_inl_dev *inl_dev)
return rc;
}
+int
+roc_nix_inl_dev_stats_get(struct roc_nix_stats *stats)
+{
+ struct idev_cfg *idev = idev_get_cfg();
+ struct nix_inl_dev *inl_dev = NULL;
+
+ if (stats == NULL)
+ return NIX_ERR_PARAM;
+
+ if (idev && idev->nix_inl_dev)
+ inl_dev = idev->nix_inl_dev;
+
+ if (!inl_dev)
+ return -EINVAL;
+
+ stats->rx_octs = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_OCTS);
+ stats->rx_ucast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_UCAST);
+ stats->rx_bcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_BCAST);
+ stats->rx_mcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_MCAST);
+ stats->rx_drop = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP);
+ stats->rx_drop_octs = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP_OCTS);
+ stats->rx_fcs = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_FCS);
+ stats->rx_err = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_ERR);
+ stats->rx_drop_bcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_BCAST);
+ stats->rx_drop_mcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_MCAST);
+ stats->rx_drop_l3_bcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3BCAST);
+ stats->rx_drop_l3_mcast = INL_NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3MCAST);
+
+ return 0;
+}
+
int
roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
{
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5d2b75fb5a..6c69c425df 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -146,6 +146,7 @@ INTERNAL {
roc_nix_inl_dev_fini;
roc_nix_inl_dev_init;
roc_nix_inl_dev_is_probed;
+ roc_nix_inl_dev_stats_get;
roc_nix_inl_dev_lock;
roc_nix_inl_dev_pffunc_get;
roc_nix_inl_dev_rq;
--
2.25.1
* [PATCH 05/15] common/cnxk: distribute SQs to SDP channels
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (2 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 04/15] common/cnxk: add separate inline dev stats API Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 06/15] common/cnxk: remove flow control config at queue setup Nithin Dabilpuram
` (9 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Veerasenareddy Burru
From: Veerasenareddy Burru <vburru@marvell.com>
Map SQs to SDP channels using a round-robin policy.
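The mapping, sketched with hypothetical values (tx_chan_base and
tx_chan_cnt come from the AF response, as in the diff):

  /* Round-robin SQ-to-SDP-channel assignment. With tx_chan_base = 0x700
   * and tx_chan_cnt = 4, SQs 0..5 land on channels 0x700, 0x701, 0x702,
   * 0x703, 0x700, 0x701.
   */
  if (roc_nix_is_sdp(roc_nix))
          aq->sq.default_chan = nix->tx_chan_base + (sq->qid % nix->tx_chan_cnt);
  else
          aq->sq.default_chan = nix->tx_chan_base;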
Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
---
drivers/common/cnxk/roc_nix_queue.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 287a489e7f..009c024064 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -1202,9 +1202,9 @@ sq_cn9k_fini(struct nix *nix, struct roc_nix_sq *sq)
}
static int
-sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
- uint16_t smq)
+sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, uint16_t smq)
{
+ struct roc_nix *roc_nix = nix_priv_to_roc_nix(nix);
struct mbox *mbox = (&nix->dev)->mbox;
struct nix_cn10k_aq_enq_req *aq;
@@ -1220,7 +1220,10 @@ sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
aq->sq.max_sqe_size = sq->max_sqe_sz;
aq->sq.smq = smq;
aq->sq.smq_rr_weight = rr_quantum;
- aq->sq.default_chan = nix->tx_chan_base;
+ if (roc_nix_is_sdp(roc_nix))
+ aq->sq.default_chan = nix->tx_chan_base + (sq->qid % nix->tx_chan_cnt);
+ else
+ aq->sq.default_chan = nix->tx_chan_base;
aq->sq.sqe_stype = NIX_STYPE_STF;
aq->sq.ena = 1;
aq->sq.sso_ena = !!sq->sso_ena;
--
2.25.1
* [PATCH 06/15] common/cnxk: remove flow control config at queue setup
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (3 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 05/15] common/cnxk: distribute SQs to SDP channels Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 07/15] common/cnxk: enable 10K B0 support for inline IPsec Nithin Dabilpuram
` (8 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: jerinj, dev
Remove the default flow control enable/disable from queue setup
time and move it to an explicit per-queue setup done after the
device is configured, as part of device start.
Also remove the TM node reference count for flow control to avoid
a ref-count mismatch. For a user tree, disabling flow control or
PFC on one SQ now clears the config from the corresponding TM node
immediately, irrespective of other SQ connections.
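With the implicit setup gone, flow control is enabled explicitly per RQ
once the device is configured; a hedged sketch using the ROC API this
series touches:

  static int
  rq_fc_enable(struct roc_nix *nix, struct roc_nix_rq *rq, struct roc_nix_cq *cq)
  {
          struct roc_nix_fc_cfg fc_cfg;

          memset(&fc_cfg, 0, sizeof(fc_cfg));
          fc_cfg.type = ROC_NIX_FC_RQ_CFG;
          fc_cfg.rq_cfg.rq = rq->qid;
          fc_cfg.rq_cfg.tc = 0;
          fc_cfg.rq_cfg.enable = true;
          fc_cfg.rq_cfg.pool = rq->aura_handle;
          fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
          return roc_nix_fc_config_set(nix, &fc_cfg);
  }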
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.c | 18 +++---
drivers/common/cnxk/roc_nix.h | 1 +
drivers/common/cnxk/roc_nix_debug.c | 2 +
drivers/common/cnxk/roc_nix_fc.c | 67 +++++++--------------
drivers/common/cnxk/roc_nix_priv.h | 7 ++-
drivers/common/cnxk/roc_nix_queue.c | 14 ++---
drivers/common/cnxk/roc_nix_tm.c | 81 +++++++++++---------------
drivers/common/cnxk/roc_nix_tm_ops.c | 2 +-
drivers/common/cnxk/roc_nix_tm_utils.c | 14 +----
9 files changed, 82 insertions(+), 124 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index fbf318a77d..97ef1c7133 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -214,6 +214,13 @@ roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq,
nix->tx_link = rsp->tx_link;
nix->nb_rx_queues = nb_rxq;
nix->nb_tx_queues = nb_txq;
+
+ nix->rqs = plt_zmalloc(sizeof(struct roc_nix_rq *) * nb_rxq, 0);
+ if (!nix->rqs) {
+ rc = -ENOMEM;
+ goto fail;
+ }
+
nix->sqs = plt_zmalloc(sizeof(struct roc_nix_sq *) * nb_txq, 0);
if (!nix->sqs) {
rc = -ENOMEM;
@@ -235,7 +242,9 @@ roc_nix_lf_free(struct roc_nix *roc_nix)
struct ndc_sync_op *ndc_req;
int rc = -ENOSPC;
+ plt_free(nix->rqs);
plt_free(nix->sqs);
+ nix->rqs = NULL;
nix->sqs = NULL;
/* Sync NDC-NIX for LF */
@@ -456,15 +465,6 @@ roc_nix_dev_init(struct roc_nix *roc_nix)
nix->reta_sz = reta_sz;
nix->mtu = ROC_NIX_DEFAULT_HW_FRS;
- /* Always start with full FC for LBK */
- if (nix->lbk_link) {
- nix->rx_pause = 1;
- nix->tx_pause = 1;
- } else if (!roc_nix_is_vf_or_sdp(roc_nix)) {
- /* Get the current state of flow control */
- roc_nix_fc_mode_get(roc_nix);
- }
-
/* Register error and ras interrupts */
rc = nix_register_irqs(nix);
if (rc)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 96756b1a2b..f04dd63e27 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -328,6 +328,7 @@ struct roc_nix_rq {
struct roc_nix *roc_nix;
uint64_t meta_aura_handle;
uint16_t inl_dev_refs;
+ uint8_t tc;
};
struct roc_nix_cq {
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 2f8c595bd9..97d86f9a97 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -879,6 +879,7 @@ roc_nix_rq_dump(struct roc_nix_rq *rq, FILE *file)
nix_dump(file, " vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
nix_dump(file, " roc_nix = %p", rq->roc_nix);
nix_dump(file, " inl_dev_refs = %d", rq->inl_dev_refs);
+ nix_dump(file, " tc = %d", rq->tc);
}
void
@@ -911,6 +912,7 @@ roc_nix_sq_dump(struct roc_nix_sq *sq, FILE *file)
nix_dump(file, " lmt_addr = %p", sq->lmt_addr);
nix_dump(file, " sqe_mem = %p", sq->sqe_mem);
nix_dump(file, " fc = %p", sq->fc);
+ nix_dump(file, " tc = %d", sq->tc);
};
static uint8_t
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 784e6e5416..39c16995cd 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -278,9 +278,12 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
static int
nix_fc_rq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
struct roc_nix_fc_cfg tmp;
- int sso_ena = 0;
+ struct roc_nix_rq *rq;
+ int sso_ena = 0, rc;
+ rq = nix->rqs[fc_cfg->rq_cfg.rq];
/* Check whether RQ is connected to SSO or not */
sso_ena = roc_nix_rq_is_sso_enable(roc_nix, fc_cfg->rq_cfg.rq);
if (sso_ena < 0)
@@ -299,7 +302,14 @@ nix_fc_rq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
tmp.cq_cfg.cq_drop = fc_cfg->rq_cfg.cq_drop;
tmp.cq_cfg.enable = fc_cfg->rq_cfg.enable;
- return nix_fc_cq_config_set(roc_nix, &tmp);
+ rc = nix_fc_cq_config_set(roc_nix, &tmp);
+ if (rc)
+ return rc;
+
+ rq->tc = fc_cfg->rq_cfg.enable ? fc_cfg->rq_cfg.tc : ROC_NIX_PFC_CLASS_INVALID;
+ plt_nix_dbg("RQ %u: TC %u %s", fc_cfg->rq_cfg.rq, fc_cfg->rq_cfg.tc,
+ fc_cfg->rq_cfg.enable ? "enabled" : "disabled");
+ return 0;
}
int
@@ -334,7 +344,7 @@ roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
else if (fc_cfg->type == ROC_NIX_FC_TM_CFG)
return nix_tm_bp_config_set(roc_nix, fc_cfg->tm_cfg.sq,
fc_cfg->tm_cfg.tc,
- fc_cfg->tm_cfg.enable, false);
+ fc_cfg->tm_cfg.enable);
return -EINVAL;
}
@@ -343,50 +353,17 @@ enum roc_nix_fc_mode
roc_nix_fc_mode_get(struct roc_nix *roc_nix)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
- struct dev *dev = &nix->dev;
- struct mbox *mbox = mbox_get(dev->mbox);
- struct cgx_pause_frm_cfg *req, *rsp;
enum roc_nix_fc_mode mode;
- int rc = -ENOSPC;
- /* Flow control on LBK link is always available */
- if (roc_nix_is_lbk(roc_nix)) {
- if (nix->tx_pause && nix->rx_pause)
- rc = ROC_NIX_FC_FULL;
- else if (nix->rx_pause)
- rc = ROC_NIX_FC_RX;
- else if (nix->tx_pause)
- rc = ROC_NIX_FC_TX;
- else
- rc = ROC_NIX_FC_NONE;
- goto exit;
- }
-
- req = mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- if (req == NULL)
- goto exit;
- req->set = 0;
-
- rc = mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto exit;
-
- if (rsp->rx_pause && rsp->tx_pause)
+ if (nix->tx_pause && nix->rx_pause)
mode = ROC_NIX_FC_FULL;
- else if (rsp->rx_pause)
+ else if (nix->rx_pause)
mode = ROC_NIX_FC_RX;
- else if (rsp->tx_pause)
+ else if (nix->tx_pause)
mode = ROC_NIX_FC_TX;
else
mode = ROC_NIX_FC_NONE;
-
- nix->rx_pause = rsp->rx_pause;
- nix->tx_pause = rsp->tx_pause;
- rc = mode;
-
-exit:
- mbox_put(mbox);
- return rc;
+ return mode;
}
int
@@ -570,8 +547,8 @@ roc_nix_pfc_mode_set(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg)
if (rc)
goto exit;
- nix->rx_pause = rsp->rx_pause;
- nix->tx_pause = rsp->tx_pause;
+ nix->pfc_rx_pause = rsp->rx_pause;
+ nix->pfc_tx_pause = rsp->tx_pause;
if (rsp->tx_pause)
nix->cev |= BIT(pfc_cfg->tc);
else
@@ -592,11 +569,11 @@ roc_nix_pfc_mode_get(struct roc_nix *roc_nix, struct roc_nix_pfc_cfg *pfc_cfg)
pfc_cfg->tc = nix->cev;
- if (nix->rx_pause && nix->tx_pause)
+ if (nix->pfc_rx_pause && nix->pfc_tx_pause)
pfc_cfg->mode = ROC_NIX_FC_FULL;
- else if (nix->rx_pause)
+ else if (nix->pfc_rx_pause)
pfc_cfg->mode = ROC_NIX_FC_RX;
- else if (nix->tx_pause)
+ else if (nix->pfc_tx_pause)
pfc_cfg->mode = ROC_NIX_FC_TX;
else
pfc_cfg->mode = ROC_NIX_FC_NONE;
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 7d2e3626a3..2fe9093324 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -102,7 +102,6 @@ struct nix_tm_node {
/* Last stats */
uint64_t last_pkts;
uint64_t last_bytes;
- uint32_t tc_refcnt;
};
struct nix_tm_shaper_profile {
@@ -131,6 +130,7 @@ struct nix {
struct nix_qint *cints_mem;
uint8_t configured_qints;
uint8_t configured_cints;
+ struct roc_nix_rq **rqs;
struct roc_nix_sq **sqs;
uint16_t vwqe_interval;
uint16_t tx_chan_base;
@@ -158,6 +158,8 @@ struct nix {
uint16_t msixoff;
uint8_t rx_pause;
uint8_t tx_pause;
+ uint8_t pfc_rx_pause;
+ uint8_t pfc_tx_pause;
uint16_t cev;
uint64_t rx_cfg;
struct dev dev;
@@ -407,7 +409,7 @@ int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable);
int nix_tm_bp_config_get(struct roc_nix *roc_nix, bool *is_enabled);
int nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
- bool enable, bool force_flush);
+ bool enable);
void nix_rq_vwqe_flush(struct roc_nix_rq *rq, uint16_t vwqe_interval);
int nix_tm_mark_init(struct nix *nix);
void nix_tm_sq_free_sqe_buffer(uint64_t *sqe, int head_off, int end_off, int instr_sz);
@@ -469,6 +471,7 @@ int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
uint16_t cints);
int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid,
__io void **ctx_p);
+uint8_t nix_tm_lbk_relchan_get(struct nix *nix);
/*
* Telemetry
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 009c024064..07ec1270d7 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -667,6 +667,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
}
rq->roc_nix = roc_nix;
+ rq->tc = ROC_NIX_PFC_CLASS_INVALID;
if (is_cn9k)
rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
@@ -695,6 +696,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
return rc;
}
+ nix->rqs[rq->qid] = rq;
return nix_tel_node_add_rq(rq);
}
@@ -718,6 +720,7 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
nix_rq_aura_buf_type_update(rq, false);
rq->roc_nix = roc_nix;
+ rq->tc = ROC_NIX_PFC_CLASS_INVALID;
mbox = mbox_get(m_box);
if (is_cn9k)
@@ -779,6 +782,7 @@ roc_nix_rq_cman_config(struct roc_nix *roc_nix, struct roc_nix_rq *rq)
int
roc_nix_rq_fini(struct roc_nix_rq *rq)
{
+ struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix);
int rc;
/* Disabling RQ is sufficient */
@@ -788,6 +792,8 @@ roc_nix_rq_fini(struct roc_nix_rq *rq)
/* Update aura attribute to indicate its use for */
nix_rq_aura_buf_type_update(rq, false);
+
+ nix->rqs[rq->qid] = NULL;
return 0;
}
@@ -895,14 +901,6 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
}
}
- /* TX pause frames enable flow ctrl on RX side */
- if (nix->tx_pause) {
- /* Single BPID is allocated for all rx channels for now */
- cq_ctx->bpid = nix->bpid[0];
- cq_ctx->bp = cq->drop_thresh;
- cq_ctx->bp_ena = 1;
- }
-
rc = mbox_process(mbox);
mbox_put(mbox);
if (rc)
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 6d470f424d..c104611355 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -101,7 +101,6 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
{
struct nix_tm_node_list *list;
struct nix_tm_node *node;
- bool skip_bp = false;
uint32_t hw_lvl;
int rc = 0;
@@ -116,11 +115,8 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
* set per channel only for PF or lbk vf.
*/
node->bp_capa = 0;
- if (!nix->sdp_link && !skip_bp &&
- node->hw_lvl == nix->tm_link_cfg_lvl) {
+ if (!nix->sdp_link && node->hw_lvl == nix->tm_link_cfg_lvl)
node->bp_capa = 1;
- skip_bp = false;
- }
rc = nix_tm_node_reg_conf(nix, node);
if (rc)
@@ -315,7 +311,7 @@ nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node)
int
nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
- bool enable, bool force_flush)
+ bool enable)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
enum roc_nix_tm_tree tree = nix->tm_tree;
@@ -327,9 +323,10 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
struct nix_tm_node *parent;
struct nix_tm_node *node;
struct roc_nix_sq *sq_s;
+ uint16_t rel_chan = 0;
uint8_t parent_lvl;
uint8_t k = 0;
- int rc = 0;
+ int rc = 0, i;
sq_s = nix->sqs[sq];
if (!sq_s)
@@ -354,9 +351,17 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
list = nix_tm_node_list(nix, tree);
+ /* Get relative channel if loopback */
+ if (roc_nix_is_lbk(roc_nix))
+ rel_chan = nix_tm_lbk_relchan_get(nix);
+ else
+ rel_chan = tc;
+
/* Enable request, parent rel chan already configured */
if (enable && parent->rel_chan != NIX_TM_CHAN_INVALID &&
- parent->rel_chan != tc) {
+ parent->rel_chan != rel_chan) {
+ plt_err("SQ %d: parent node TL3 id %d already has rel_chan %d set",
+ sq, parent->hw_id, parent->rel_chan);
rc = -EINVAL;
goto err;
}
@@ -378,36 +383,21 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
continue;
/* Restrict sharing of TL3 across the queues */
- if (enable && node != parent && node->rel_chan == tc) {
- plt_err("SQ %d node TL3 id %d already has %d tc value set",
- sq, node->hw_id, tc);
- return -EINVAL;
+ if (enable && node != parent && node->rel_chan == rel_chan) {
+ plt_warn("SQ %d: sibling node TL3 %d already has %d(%d) tc value set",
+ sq, node->hw_id, tc, rel_chan);
+ return -EEXIST;
}
}
- /* In case of user tree i.e. multiple SQs may share a TL3, disabling PFC
- * on one of such SQ should not hamper the traffic control on other SQs.
- * Maitaining a reference count scheme to account no of SQs sharing the
- * TL3 before disabling PFC on it.
- */
- if (!force_flush && !enable &&
- parent->rel_chan != NIX_TM_CHAN_INVALID) {
- if (sq_s->tc != ROC_NIX_PFC_CLASS_INVALID)
- parent->tc_refcnt--;
- if (parent->tc_refcnt > 0)
- return 0;
- }
+ /* Allocating TL3 request */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox_get(mbox));
+ req->lvl = nix->tm_link_cfg_lvl;
+ k = 0;
- /* Allocating TL3 resources */
- if (!req) {
- req = mbox_alloc_msg_nix_txschq_cfg(mbox_get(mbox));
- req->lvl = nix->tm_link_cfg_lvl;
- k = 0;
- }
-
- /* Enable PFC on the identified TL3 */
+ /* Enable PFC/pause on the identified TL3 */
req->reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(parent->hw_id, link);
- req->regval[k] = enable ? tc : 0;
+ req->regval[k] = enable ? rel_chan : 0;
req->regval[k] |= enable ? BIT_ULL(13) : 0;
req->regval_mask[k] = ~(BIT_ULL(13) | GENMASK_ULL(7, 0));
k++;
@@ -417,12 +407,17 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
if (rc)
goto err;
- parent->rel_chan = enable ? tc : NIX_TM_CHAN_INVALID;
- /* Increase reference count for parent TL3 */
- if (enable && sq_s->tc == ROC_NIX_PFC_CLASS_INVALID)
- parent->tc_refcnt++;
+ parent->rel_chan = enable ? rel_chan : NIX_TM_CHAN_INVALID;
+ sq_s->tc = enable ? tc : ROC_NIX_PFC_CLASS_INVALID;
+ /* Clear other SQ's with same TC i.e same parent node */
+ for (i = 0; !enable && i < nix->nb_tx_queues; i++) {
+ if (nix->sqs[i] && nix->sqs[i]->tc == tc)
+ nix->sqs[i]->tc = ROC_NIX_PFC_CLASS_INVALID;
+ }
rc = 0;
+ plt_tm_dbg("SQ %u: TL3 %d TC %u %s",
+ sq, parent->hw_id, tc, enable ? "enabled" : "disabled");
goto exit;
err:
plt_err("Failed to %s bp on link %u, rc=%d(%s)",
@@ -802,7 +797,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
}
/* Disable backpressure */
- rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false, true);
+ rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false);
if (rc) {
plt_err("Failed to disable backpressure for flush, rc=%d", rc);
return rc;
@@ -942,16 +937,6 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq)
}
}
- if (!nix->rx_pause)
- return 0;
-
- /* Restore backpressure */
- rc = nix_tm_bp_config_set(roc_nix, sq->qid, sq->tc, true, false);
- if (rc) {
- plt_err("Failed to restore backpressure, rc=%d", rc);
- return rc;
- }
-
return 0;
}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 8fb65be9d4..4e88ad1beb 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -481,7 +481,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
if (!sq)
continue;
- rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false, false);
+ rc = nix_tm_bp_config_set(roc_nix, sq->qid, 0, false);
if (rc && rc != -ENOENT) {
plt_err("Failed to disable backpressure, rc=%d", rc);
goto cleanup;
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index a52b897713..a7ba2bf027 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -72,8 +72,8 @@ nix_tm_lvl2nix(struct nix *nix, uint32_t lvl)
return nix_tm_lvl2nix_tl2_root(lvl);
}
-static uint8_t
-nix_tm_relchan_get(struct nix *nix)
+uint8_t
+nix_tm_lbk_relchan_get(struct nix *nix)
{
return nix->tx_chan_base & 0xff;
}
@@ -531,7 +531,7 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
parent = node->parent->hw_id;
link = nix->tx_link;
- relchan = nix_tm_relchan_get(nix);
+ relchan = roc_nix_is_lbk(roc_nix) ? nix_tm_lbk_relchan_get(nix) : 0;
if (hw_lvl != NIX_TXSCH_LVL_SMQ)
child = nix_tm_find_prio_anchor(nix, node->id, tree);
@@ -602,10 +602,6 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
regval[k] = BIT_ULL(12) | relchan;
- /* Enable BP if node is BP capable and rx_pause is set
- */
- if (nix->rx_pause && node->bp_capa)
- regval[k] |= BIT_ULL(13);
k++;
}
@@ -625,10 +621,6 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
regval[k] = BIT_ULL(12) | relchan;
- /* Enable BP if node is BP capable and rx_pause is set
- */
- if (nix->rx_pause && node->bp_capa)
- regval[k] |= BIT_ULL(13);
k++;
}
--
2.25.1
* [PATCH 07/15] common/cnxk: enable 10K B0 support for inline IPsec
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (4 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 06/15] common/cnxk: remove flow control config at queue setup Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 08/15] net/cnxk: check flow control config per queue on dev start Nithin Dabilpuram
` (7 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: jerinj, dev
Enable the same inline IPsec support for CN10KA B0 as for CN10KB,
since CN10KA B0 behaves like CN10KB in this respect.
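Call sites now key off a feature helper instead of exact model checks,
so extending support is a one-line change; sketch taken from the diff:

  static inline bool
  roc_feature_nix_has_late_bp(void)
  {
          return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
  }

  /* Call site in CQ init: */
  if (roc_feature_nix_has_late_bp() && roc_nix_inl_inb_is_enabled(roc_nix))
          cq_ctx->cpt_drop_err_en = 1;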
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_api.h | 3 +++
drivers/common/cnxk/roc_cpt.h | 2 --
drivers/common/cnxk/roc_features.h | 31 +++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_debug.c | 16 ++++++++-------
drivers/common/cnxk/roc_nix_fc.c | 10 ++++++++--
drivers/common/cnxk/roc_nix_inl.c | 11 ++--------
drivers/common/cnxk/roc_nix_inl.h | 1 -
drivers/common/cnxk/roc_nix_queue.c | 5 +++--
drivers/common/cnxk/version.map | 1 -
drivers/net/cnxk/cn10k_ethdev.c | 4 ++--
drivers/net/cnxk/cn10k_ethdev_sec.c | 2 +-
11 files changed, 59 insertions(+), 27 deletions(-)
create mode 100644 drivers/common/cnxk/roc_features.h
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 9d7f5417c2..993a2f7a68 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -47,6 +47,9 @@
/* HW Errata */
#include "roc_errata.h"
+/* HW Features */
+#include "roc_features.h"
+
/* Mbox */
#include "roc_mbox.h"
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 6966e0f10b..d3a5683dc8 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -9,8 +9,6 @@
#include "roc_platform.h"
-struct nix_inline_ipsec_cfg;
-
#define ROC_AE_CPT_BLOCK_TYPE1 0
#define ROC_AE_CPT_BLOCK_TYPE2 1
diff --git a/drivers/common/cnxk/roc_features.h b/drivers/common/cnxk/roc_features.h
new file mode 100644
index 0000000000..27bccd6b9c
--- /dev/null
+++ b/drivers/common/cnxk/roc_features.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+#ifndef _ROC_FEATURES_H_
+#define _ROC_FEATURES_H_
+
+static inline bool
+roc_feature_nix_has_inl_ipsec_mseg(void)
+{
+ return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
+}
+
+static inline bool
+roc_feature_nix_has_inl_rq_mask(void)
+{
+ return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
+}
+
+static inline bool
+roc_feature_nix_has_late_bp(void)
+{
+ return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
+}
+
+static inline bool
+roc_feature_nix_has_reass(void)
+{
+ return roc_model_is_cn10ka();
+}
+
+#endif
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 97d86f9a97..399d0d7eae 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -661,6 +661,12 @@ nix_lf_cq_dump(__io struct nix_cq_ctx_s *ctx, FILE *file)
ctx->qint_idx);
nix_dump(file, "W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n", ctx->bpid,
ctx->bp_ena);
+ nix_dump(file,
+ "W1: lbpid_high \t\t\t0x%03x\nW1: lbpid_med \t\t\t0x%03x\n"
+ "W1: lbpid_low \t\t\t0x%03x\n(W1: lbpid) \t\t\t0x%03x\n",
+ ctx->lbpid_high, ctx->lbpid_med, ctx->lbpid_low,
+ ctx->lbpid_high << 6 | ctx->lbpid_med << 3 | ctx->lbpid_low);
+ nix_dump(file, "W1: lbp_ena \t\t\t\t%d\n", ctx->lbp_ena);
nix_dump(file, "W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
ctx->update_time, ctx->avg_level);
@@ -671,14 +677,10 @@ nix_lf_cq_dump(__io struct nix_cq_ctx_s *ctx, FILE *file)
ctx->cq_err_int_ena, ctx->cq_err_int);
nix_dump(file, "W3: qsize \t\t\t%d\nW3: caching \t\t\t%d", ctx->qsize,
ctx->caching);
- nix_dump(file, "W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d\nW3: lbp_ena \t\t\t%d",
- ctx->substream, ctx->ena, ctx->lbp_ena);
- nix_dump(file,
- "W3: lbpid_high \t\t\t0x%03x\nW3: lbpid_med \t\t\t0x%03x\n"
- "W3: lbpid_low \t\t\t0x%03x\n(W3: lbpid) \t\t\t0x%03x",
- ctx->lbpid_high, ctx->lbpid_med, ctx->lbpid_low,
- ctx->lbpid_high << 6 | ctx->lbpid_med << 3 | ctx->lbpid_low);
nix_dump(file, "W3: lbp_frac \t\t\t%d\n", ctx->lbp_frac);
+ nix_dump(file, "W3: substream \t\t\t0x%03x\nW3: cpt_drop_err_en \t\t\t%d\n",
+ ctx->substream, ctx->cpt_drop_err_en);
+ nix_dump(file, "W3: ena \t\t\t%d\n", ctx->ena);
nix_dump(file, "W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d", ctx->drop_ena,
ctx->drop);
nix_dump(file, "W3: bp \t\t\t\t%d\n", ctx->bp);
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 39c16995cd..7574a88bf6 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -77,7 +77,10 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable)
if (req == NULL)
goto exit;
req->chan_base = 0;
- req->chan_cnt = 1;
+ if (roc_nix_is_lbk(roc_nix) || roc_nix_is_sdp(roc_nix))
+ req->chan_cnt = NIX_LBK_MAX_CHAN;
+ else
+ req->chan_cnt = NIX_CGX_MAX_CHAN;
req->bpid_per_chan = 0;
rc = mbox_process_msg(mbox, (void *)&rsp);
@@ -89,7 +92,10 @@ nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable)
if (req == NULL)
goto exit;
req->chan_base = 0;
- req->chan_cnt = 1;
+ if (roc_nix_is_lbk(roc_nix) || roc_nix_is_sdp(roc_nix))
+ req->chan_cnt = NIX_LBK_MAX_CHAN;
+ else
+ req->chan_cnt = NIX_CGX_MAX_CHAN;
req->bpid_per_chan = 0;
rc = mbox_process_msg(mbox, (void *)&rsp);
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 70b4ae9277..19f500ee54 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -485,13 +485,6 @@ nix_inl_rq_mask_cfg(struct roc_nix *roc_nix, bool enable)
return rc;
}
-bool
-roc_nix_has_reass_support(struct roc_nix *nix)
-{
- PLT_SET_USED(nix);
- return !!roc_model_is_cn10ka();
-}
-
int
roc_nix_inl_inb_init(struct roc_nix *roc_nix)
{
@@ -574,7 +567,7 @@ roc_nix_inl_inb_fini(struct roc_nix *roc_nix)
nix_inl_meta_aura_destroy();
}
- if (roc_model_is_cn10kb_a0()) {
+ if (roc_feature_nix_has_inl_rq_mask()) {
rc = nix_inl_rq_mask_cfg(roc_nix, false);
if (rc) {
plt_err("Failed to get rq mask rc=%d", rc);
@@ -1046,7 +1039,7 @@ roc_nix_inl_rq_ena_dis(struct roc_nix *roc_nix, bool enable)
if (!idev)
return -EFAULT;
- if (roc_model_is_cn10kb_a0()) {
+ if (roc_feature_nix_has_inl_rq_mask()) {
rc = nix_inl_rq_mask_cfg(roc_nix, true);
if (rc) {
plt_err("Failed to get rq mask rc=%d", rc);
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 3bb37ce225..105a9e4ec4 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -182,7 +182,6 @@ int __roc_api roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena,
bool inb_inl_dev);
int __roc_api roc_nix_inl_rq_ena_dis(struct roc_nix *roc_nix, bool ena);
int __roc_api roc_nix_inl_meta_aura_check(struct roc_nix_rq *rq);
-bool __roc_api roc_nix_has_reass_support(struct roc_nix *nix);
/* NIX Inline Outbound API */
int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 07ec1270d7..33b2cdf90f 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -863,7 +863,7 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
cq_ctx->avg_level = 0xff;
cq_ctx->cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
- if (roc_model_is_cn10kb() && roc_nix_inl_inb_is_enabled(roc_nix)) {
+ if (roc_feature_nix_has_late_bp() && roc_nix_inl_inb_is_enabled(roc_nix)) {
cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_CPT_DROP);
cq_ctx->cpt_drop_err_en = 1;
/* Enable Late BP only when non zero CPT BPID */
@@ -900,6 +900,7 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
cq_ctx->drop_ena = 1;
}
}
+ cq_ctx->bp = cq->drop_thresh;
rc = mbox_process(mbox);
mbox_put(mbox);
@@ -960,7 +961,7 @@ roc_nix_cq_fini(struct roc_nix_cq *cq)
aq->cq.bp_ena = 0;
aq->cq_mask.ena = ~aq->cq_mask.ena;
aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena;
- if (roc_model_is_cn10kb() && roc_nix_inl_inb_is_enabled(cq->roc_nix)) {
+ if (roc_feature_nix_has_late_bp() && roc_nix_inl_inb_is_enabled(cq->roc_nix)) {
aq->cq.lbp_ena = 0;
aq->cq_mask.lbp_ena = ~aq->cq_mask.lbp_ena;
}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 6c69c425df..53f2129e71 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -138,7 +138,6 @@ INTERNAL {
roc_nix_get_pf_func;
roc_nix_get_vf;
roc_nix_get_vwqe_interval;
- roc_nix_has_reass_support;
roc_nix_inl_cb_register;
roc_nix_inl_cb_unregister;
roc_nix_inl_ctx_write;
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index b84fed6d90..cb88bd2dc1 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -591,7 +591,7 @@ cn10k_nix_reassembly_capability_get(struct rte_eth_dev *eth_dev,
int rc = -ENOTSUP;
RTE_SET_USED(eth_dev);
- if (!roc_nix_has_reass_support(&dev->nix))
+ if (!roc_feature_nix_has_reass())
return -ENOTSUP;
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
@@ -621,7 +621,7 @@ cn10k_nix_reassembly_conf_set(struct rte_eth_dev *eth_dev,
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
int rc = 0;
- if (!roc_nix_has_reass_support(&dev->nix))
+ if (!roc_feature_nix_has_reass())
return -ENOTSUP;
if (!conf->flags) {
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index ed5c335787..3c32de0f94 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -809,7 +809,7 @@ cn10k_eth_sec_session_create(void *device,
sess_priv.chksum = (!ipsec->options.ip_csum_enable << 1 |
!ipsec->options.l4_csum_enable);
sess_priv.dec_ttl = ipsec->options.dec_ttl;
- if (roc_model_is_cn10kb_a0())
+ if (roc_feature_nix_has_inl_ipsec_mseg())
sess_priv.nixtx_off = 1;
/* Pointer from eth_sec -> outb_sa */
--
2.25.1
* [PATCH 08/15] net/cnxk: check flow control config per queue on dev start
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (5 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 07/15] common/cnxk: enable 10K B0 support for inline IPsec Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 09/15] net/cnxk: don't allow PFC configuration on started port Nithin Dabilpuram
` (6 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: jerinj, dev
Check and enable/disable the flow control config per queue on
device start to handle cases like SSO enablement, TM changes, etc.
Modify the flow control config get to report status per RQ/SQ.
Also disallow changes to the flow control config while the device
is in the started state.
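The reported mode is now derived from per-queue state; a hedged sketch
of the aggregation in cnxk_nix_flow_ctrl_get(), with variables as in
the diff:

  /* Start from the MAC-level mode, then report flow control as
   * disabled if any RQ/SQ has it disabled
   * (tc == ROC_NIX_PFC_CLASS_INVALID).
   */
  rx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_RX);
  tx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_TX);

  for (i = 0; i < dev->nb_rxq; i++)
          if (dev->rqs[i].tc == ROC_NIX_PFC_CLASS_INVALID)
                  tx_pause = 0;
  for (i = 0; i < dev->nb_txq; i++)
          if (dev->sqs[i].tc == ROC_NIX_PFC_CLASS_INVALID)
                  rx_pause = 0;

  fc_conf->mode = mode_map[rx_pause][tx_pause];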
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 198 ++++++++++++++++-------------
2 files changed, 113 insertions(+), 94 deletions(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index e99335b117..d8ccd307a8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -363,7 +363,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct cnxk_fc_cfg *fc = &dev->fc_cfg;
int rc;
- if (roc_nix_is_vf_or_sdp(&dev->nix))
+ if (roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix))
return 0;
/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
@@ -388,7 +388,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
struct cnxk_fc_cfg *fc = &dev->fc_cfg;
struct rte_eth_fc_conf fc_cfg = {0};
- if (roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix))
+ if (roc_nix_is_sdp(&dev->nix))
+ return 0;
+
+ /* Don't do anything if PFC is enabled */
+ if (dev->pfc_cfg.rx_pause_en || dev->pfc_cfg.tx_pause_en)
return 0;
fc_cfg.mode = fc->mode;
@@ -481,7 +485,6 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
sq->qid = qid;
sq->nb_desc = nb_desc;
sq->max_sqe_sz = nix_sq_max_sqe_sz(dev);
- sq->tc = ROC_NIX_PFC_CLASS_INVALID;
if (nix->tx_compl_ena) {
sq->cqid = sq->qid + dev->nb_rxq;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index a6ab493626..5df7927d7b 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -205,12 +205,15 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
struct rte_eth_fc_conf *fc_conf)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- enum rte_eth_fc_mode mode_map[] = {
- RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
- RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
- };
+ enum rte_eth_fc_mode mode_map[2][2] = {
+ [0][0] = RTE_ETH_FC_NONE,
+ [0][1] = RTE_ETH_FC_TX_PAUSE,
+ [1][0] = RTE_ETH_FC_RX_PAUSE,
+ [1][1] = RTE_ETH_FC_FULL,
+ };
struct roc_nix *nix = &dev->nix;
- int mode;
+ uint8_t rx_pause, tx_pause;
+ int mode, i;
if (roc_nix_is_sdp(nix))
return 0;
@@ -219,32 +222,25 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
if (mode < 0)
return mode;
+ rx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_RX);
+ tx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_TX);
+
+ /* Report flow control as disabled even if one RQ/SQ has it disabled */
+ for (i = 0; i < dev->nb_rxq; i++) {
+ if (dev->rqs[i].tc == ROC_NIX_PFC_CLASS_INVALID)
+ tx_pause = 0;
+ }
+
+ for (i = 0; i < dev->nb_txq; i++) {
+ if (dev->sqs[i].tc == ROC_NIX_PFC_CLASS_INVALID)
+ rx_pause = 0;
+ }
+
memset(fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- fc_conf->mode = mode_map[mode];
+ fc_conf->mode = mode_map[rx_pause][tx_pause];
return 0;
}
-static int
-nix_fc_cq_config_set(struct cnxk_eth_dev *dev, uint16_t qid, bool enable)
-{
- struct roc_nix *nix = &dev->nix;
- struct roc_nix_fc_cfg fc_cfg;
- struct roc_nix_cq *cq;
- struct roc_nix_rq *rq;
-
- memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
- rq = &dev->rqs[qid];
- cq = &dev->cqs[qid];
- fc_cfg.type = ROC_NIX_FC_RQ_CFG;
- fc_cfg.rq_cfg.enable = enable;
- fc_cfg.rq_cfg.tc = 0;
- fc_cfg.rq_cfg.rq = qid;
- fc_cfg.rq_cfg.pool = rq->aura_handle;
- fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
-
- return roc_nix_fc_config_set(nix, &fc_cfg);
-}
-
int
cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
struct rte_eth_fc_conf *fc_conf)
@@ -260,68 +256,90 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
struct cnxk_eth_rxq_sp *rxq;
struct cnxk_eth_txq_sp *txq;
uint8_t rx_pause, tx_pause;
+ struct roc_nix_sq *sq;
+ struct roc_nix_cq *cq;
+ struct roc_nix_rq *rq;
+ uint8_t tc;
int rc, i;
if (roc_nix_is_sdp(nix))
return 0;
+ if (dev->pfc_cfg.rx_pause_en || dev->pfc_cfg.tx_pause_en) {
+ plt_err("Disable PFC before configuring Flow Control");
+ return -ENOTSUP;
+ }
+
if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
plt_info("Only MODE configuration is supported");
return -EINVAL;
}
-
- rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
-
- if (fc_conf->mode == fc->mode) {
- fc->rx_pause = rx_pause;
- fc->tx_pause = tx_pause;
- return 0;
+ /* Disallow flow control changes when device is in started state */
+ if (data->dev_started) {
+ plt_info("Stop the port=%d for setting flow control", data->port_id);
+ return -EBUSY;
}
+ rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) || (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+ tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) || (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
+
/* Check if TX pause frame is already enabled or not */
- if (fc->tx_pause ^ tx_pause) {
- if (roc_model_is_cn96_ax() && data->dev_started) {
- /* On Ax, CQ should be in disabled state
- * while setting flow control configuration.
- */
- plt_info("Stop the port=%d for setting flow control",
- data->port_id);
- return 0;
- }
+ tc = tx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+ for (i = 0; i < data->nb_rx_queues; i++) {
+ struct roc_nix_fc_cfg fc_cfg;
- for (i = 0; i < data->nb_rx_queues; i++) {
- struct roc_nix_fc_cfg fc_cfg;
+ /* Skip if RQ does not exist */
+ if (!data->rx_queues[i])
+ continue;
- memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
- rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[i]) -
- 1;
- rxq->tx_pause = !!tx_pause;
- rc = nix_fc_cq_config_set(dev, rxq->qid, !!tx_pause);
- if (rc)
- return rc;
- }
+ rxq = cnxk_eth_rxq_to_sp(data->rx_queues[i]);
+ rq = &dev->rqs[rxq->qid];
+ cq = &dev->cqs[rxq->qid];
+
+ /* Skip if RQ is in expected state */
+ if (fc->tx_pause == tx_pause && rq->tc == tc)
+ continue;
+
+ memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+ fc_cfg.type = ROC_NIX_FC_RQ_CFG;
+ fc_cfg.rq_cfg.enable = !!tx_pause;
+ fc_cfg.rq_cfg.tc = 0;
+ fc_cfg.rq_cfg.rq = rq->qid;
+ fc_cfg.rq_cfg.pool = rq->aura_handle;
+ fc_cfg.rq_cfg.cq_drop = cq->drop_thresh;
+
+ rc = roc_nix_fc_config_set(nix, &fc_cfg);
+ if (rc)
+ return rc;
+ rxq->tx_pause = !!tx_pause;
}
/* Check if RX pause frame is enabled or not */
- if (fc->rx_pause ^ rx_pause) {
- for (i = 0; i < data->nb_tx_queues; i++) {
- struct roc_nix_fc_cfg fc_cfg;
-
- memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
- txq = ((struct cnxk_eth_txq_sp *)data->tx_queues[i]) -
- 1;
- fc_cfg.type = ROC_NIX_FC_TM_CFG;
- fc_cfg.tm_cfg.sq = txq->qid;
- fc_cfg.tm_cfg.enable = !!rx_pause;
- rc = roc_nix_fc_config_set(nix, &fc_cfg);
- if (rc)
- return rc;
- }
+ tc = rx_pause ? 0 : ROC_NIX_PFC_CLASS_INVALID;
+ for (i = 0; i < data->nb_tx_queues; i++) {
+ struct roc_nix_fc_cfg fc_cfg;
+
+ /* Skip if SQ does not exist */
+ if (!data->tx_queues[i])
+ continue;
+
+ txq = cnxk_eth_txq_to_sp(data->tx_queues[i]);
+ sq = &dev->sqs[txq->qid];
+
+ /* Skip if SQ is in expected state */
+ if (fc->rx_pause == rx_pause && sq->tc == tc)
+ continue;
+
+ memset(&fc_cfg, 0, sizeof(struct roc_nix_fc_cfg));
+ fc_cfg.type = ROC_NIX_FC_TM_CFG;
+ fc_cfg.tm_cfg.sq = txq->qid;
+ fc_cfg.tm_cfg.tc = 0;
+ fc_cfg.tm_cfg.enable = !!rx_pause;
+ rc = roc_nix_fc_config_set(nix, &fc_cfg);
+ if (rc && rc != -EEXIST)
+ return rc;
}
rc = roc_nix_fc_mode_set(nix, mode_map[fc_conf->mode]);
@@ -350,6 +368,7 @@ cnxk_nix_priority_flow_ctrl_queue_config(struct rte_eth_dev *eth_dev,
struct rte_eth_pfc_queue_conf *pfc_conf)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
struct roc_nix *nix = &dev->nix;
enum rte_eth_fc_mode mode;
uint8_t en, tc;
@@ -366,6 +385,12 @@ cnxk_nix_priority_flow_ctrl_queue_config(struct rte_eth_dev *eth_dev,
return -ENOTSUP;
}
+ /* Disallow flow control changes when device is in started state */
+ if (data->dev_started) {
+ plt_info("Stop the port=%d for setting PFC", data->port_id);
+ return -EBUSY;
+ }
+
mode = pfc_conf->mode;
/* Perform Tx pause configuration on RQ */
@@ -1094,7 +1119,7 @@ nix_priority_flow_ctrl_rq_conf(struct rte_eth_dev *eth_dev, uint16_t qid,
enum roc_nix_fc_mode mode;
struct roc_nix_rq *rq;
struct roc_nix_cq *cq;
- int rc;
+ int rc, i;
if (roc_model_is_cn96_ax() && data->dev_started) {
/* On Ax, CQ should be in disabled state
@@ -1127,15 +1152,13 @@ nix_priority_flow_ctrl_rq_conf(struct rte_eth_dev *eth_dev, uint16_t qid,
if (rc)
return rc;
- if (rxq->tx_pause != tx_pause) {
- if (tx_pause)
- pfc->tx_pause_en++;
- else
- pfc->tx_pause_en--;
- }
-
rxq->tx_pause = !!tx_pause;
rxq->tc = tc;
+ /* Recheck number of RQ's that have PFC enabled */
+ pfc->tx_pause_en = 0;
+ for (i = 0; i < dev->nb_rxq; i++)
+ if (dev->rqs[i].tc != ROC_NIX_PFC_CLASS_INVALID)
+ pfc->tx_pause_en++;
/* Skip if PFC already enabled in mac */
if (pfc->tx_pause_en > 1)
@@ -1168,7 +1191,7 @@ nix_priority_flow_ctrl_sq_conf(struct rte_eth_dev *eth_dev, uint16_t qid,
struct cnxk_eth_txq_sp *txq;
enum roc_nix_fc_mode mode;
struct roc_nix_sq *sq;
- int rc;
+ int rc, i;
if (data->tx_queues == NULL)
return -EINVAL;
@@ -1212,18 +1235,11 @@ nix_priority_flow_ctrl_sq_conf(struct rte_eth_dev *eth_dev, uint16_t qid,
if (rc)
return rc;
- /* Maintaining a count for SQs which are configured for PFC. This is
- * required to handle disabling of a particular SQ without affecting
- * PFC on other SQs.
- */
- if (!fc_cfg.tm_cfg.enable && sq->tc != ROC_NIX_PFC_CLASS_INVALID) {
- sq->tc = ROC_NIX_PFC_CLASS_INVALID;
- pfc->rx_pause_en--;
- } else if (fc_cfg.tm_cfg.enable &&
- sq->tc == ROC_NIX_PFC_CLASS_INVALID) {
- sq->tc = tc;
- pfc->rx_pause_en++;
- }
+ /* Recheck number of SQ's that have PFC enabled */
+ pfc->rx_pause_en = 0;
+ for (i = 0; i < dev->nb_txq; i++)
+ if (dev->sqs[i].tc != ROC_NIX_PFC_CLASS_INVALID)
+ pfc->rx_pause_en++;
if (pfc->rx_pause_en > 1)
goto exit;
--
2.25.1
* [PATCH 09/15] net/cnxk: don't allow PFC configuration on started port
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (6 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 08/15] net/cnxk: check flow control config per queue on dev start Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 10/15] net/cnxk: update aura handle for fastpath Rx queues Nithin Dabilpuram
` (5 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Rahul Bhansali
From: Rahul Bhansali <rbhansali@marvell.com>
Disallow priority flow control configuration while the port is
started.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_ops.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 5df7927d7b..068b7c3502 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -1121,8 +1121,8 @@ nix_priority_flow_ctrl_rq_conf(struct rte_eth_dev *eth_dev, uint16_t qid,
struct roc_nix_cq *cq;
int rc, i;
- if (roc_model_is_cn96_ax() && data->dev_started) {
- /* On Ax, CQ should be in disabled state
+ if (data->dev_started) {
+ /* RQ should be in disabled state
* while setting flow control configuration.
*/
plt_info("Stop the port=%d for setting flow control",
--
2.25.1
* [PATCH 10/15] net/cnxk: update aura handle for fastpath Rx queues
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (7 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 09/15] net/cnxk: don't allow PFC configuration on started port Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 11/15] common/cnxk: support of per NIX LF meta aura Nithin Dabilpuram
` (4 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Rahul Bhansali
From: Rahul Bhansali <rbhansali@marvell.com>
The meta aura for RQs is created during the queue enable process,
so the aura handle for the fastpath Rx queues must be updated
after that point.
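The ordering at device start, sketched with names from the diff:

  /* Meta aura handles become valid only once the RQs are enabled, so
   * refresh the fastpath rxq copies at dev_start time.
   */
  if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)
          cn10k_nix_rx_queue_meta_aura_update(eth_dev);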
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 33 ++++++++++++++++++++++++++-------
1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index cb88bd2dc1..2dbca698af 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -283,7 +283,6 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
struct rte_mempool *mp)
{
struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
- struct cnxk_eth_rxq_sp *rxq_sp;
struct cn10k_eth_rxq *rxq;
struct roc_nix_rq *rq;
struct roc_nix_cq *cq;
@@ -335,17 +334,34 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
rxq->lmt_base = dev->nix.lmt_base;
rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
dev->inb.inl_dev);
+ }
+
+ /* Lookup mem */
+ rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+ return 0;
+}
+
+static void
+cn10k_nix_rx_queue_meta_aura_update(struct rte_eth_dev *eth_dev)
+{
+ struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+ struct cnxk_eth_rxq_sp *rxq_sp;
+ struct cn10k_eth_rxq *rxq;
+ struct roc_nix_rq *rq;
+ int i;
+
+ /* Update Aura handle for fastpath rx queues */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rq = &dev->rqs[i];
+ rxq = eth_dev->data->rx_queues[i];
rxq->meta_aura = rq->meta_aura_handle;
- rxq_sp = cnxk_eth_rxq_to_sp(rxq);
/* Assume meta packet from normal aura if meta aura is not setup
*/
- if (!rxq->meta_aura)
+ if (!rxq->meta_aura) {
+ rxq_sp = cnxk_eth_rxq_to_sp(rxq);
rxq->meta_aura = rxq_sp->qconf.mp->pool_id;
+ }
}
-
- /* Lookup mem */
- rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
- return 0;
}
static int
@@ -557,6 +573,9 @@ cn10k_nix_dev_start(struct rte_eth_dev *eth_dev)
dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
+ if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)
+ cn10k_nix_rx_queue_meta_aura_update(eth_dev);
+
cn10k_eth_set_tx_function(eth_dev);
cn10k_eth_set_rx_function(eth_dev);
return 0;
--
2.25.1
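The selection logic in the new helper boils down to the sketch below (field
names as in the diff; the standalone function is hypothetical):

	/* Prefer the RQ meta aura created at queue enable time; if inline
	 * inbound did not set one up, meta packets come from the normal
	 * packet pool, so fall back to that pool's aura.
	 */
	static uint64_t
	rxq_pick_meta_aura(struct roc_nix_rq *rq, struct cnxk_eth_rxq_sp *rxq_sp)
	{
		if (rq->meta_aura_handle)
			return rq->meta_aura_handle;
		return rxq_sp->qconf.mp->pool_id;
	}

Because the meta aura only exists once the RQs have been enabled, this has to
run from dev_start (as in the hunk above), not from Rx queue setup.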
* [PATCH 11/15] common/cnxk: support of per NIX LF meta aura
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (8 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 10/15] net/cnxk: aura handle for fastpath Rx queues Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 12/15] common/cnxk: enable one to one SQ QINT mapping Nithin Dabilpuram
` (3 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Pavan Nikhilesh, Shijith Thotton
Cc: jerinj, dev, Rahul Bhansali
From: Rahul Bhansali <rbhansali@marvell.com>
Support creation of an individual meta aura per NIX port on the
CN106-B0/CN103xx SoCs.
The buffer size of the individual pool can be passed per NIX via the
meta_buf_sz devargs parameter when the local meta aura is created.
Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
doc/guides/nics/cnxk.rst | 14 ++
drivers/common/cnxk/roc_features.h | 6 +
drivers/common/cnxk/roc_nix.h | 5 +
drivers/common/cnxk/roc_nix_fc.c | 7 +-
drivers/common/cnxk/roc_nix_inl.c | 232 +++++++++++++++++++------
drivers/common/cnxk/roc_nix_inl.h | 7 +-
drivers/common/cnxk/roc_nix_queue.c | 6 +-
drivers/event/cnxk/cn10k_eventdev.c | 10 +-
drivers/event/cnxk/cn10k_worker.h | 11 +-
drivers/event/cnxk/cn9k_eventdev.c | 7 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 2 +-
drivers/event/cnxk/cnxk_tim_evdev.h | 2 +-
drivers/net/cnxk/cn10k_ethdev.c | 2 +
drivers/net/cnxk/cnxk_ethdev.c | 4 +
drivers/net/cnxk/cnxk_ethdev.h | 6 +-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 23 +++
drivers/net/cnxk/cnxk_ethdev_dp.h | 13 ++
drivers/net/cnxk/cnxk_ethdev_sec.c | 21 ++-
drivers/net/cnxk/cnxk_lookup.c | 37 +++-
19 files changed, 330 insertions(+), 85 deletions(-)
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 267010e760..9229056f6f 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -402,6 +402,20 @@ Runtime Config Options
-a 0002:01:00.1,tx_compl_ena=1
+- ``Meta buffer size per ethdev port for inline inbound IPsec second pass``
+
+ Size of meta buffer allocated for inline inbound IPsec second pass per
+ ethdev port can be specified by ``meta_buf_sz`` ``devargs`` parameter.
+ Default value is computed at runtime based on the pkt mbuf pools created and in use.
+ This option is for OCTEON CN106-B0/CN103XX SoC family.
+
+ For example::
+
+ -a 0002:02:00.0,meta_buf_sz=512
+
+ With the above configuration, PMD would allocate meta buffers of size 512 for
+ inline inbound IPsec processing second pass.
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/common/cnxk/roc_features.h b/drivers/common/cnxk/roc_features.h
index 27bccd6b9c..7796fef91b 100644
--- a/drivers/common/cnxk/roc_features.h
+++ b/drivers/common/cnxk/roc_features.h
@@ -16,6 +16,12 @@ roc_feature_nix_has_inl_rq_mask(void)
return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
}
+static inline bool
+roc_feature_nix_has_own_meta_aura(void)
+{
+ return (roc_model_is_cn10kb() || roc_model_is_cn10ka_b0());
+}
+
static inline bool
roc_feature_nix_has_late_bp(void)
{
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index f04dd63e27..0ec98ad630 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -434,12 +434,17 @@ struct roc_nix {
uint32_t dwrr_mtu;
bool ipsec_out_sso_pffunc;
bool custom_sa_action;
+ bool local_meta_aura_ena;
+ uint32_t meta_buf_sz;
/* End of input parameters */
/* LMT line base for "Per Core Tx LMT line" mode*/
uintptr_t lmt_base;
bool io_enabled;
bool rx_ptp_ena;
uint16_t cints;
+ uint32_t buf_sz;
+ uint64_t meta_aura_handle;
+ uintptr_t meta_mempool;
#define ROC_NIX_MEM_SZ (6 * 1056)
uint8_t reserved[ROC_NIX_MEM_SZ] __plt_cache_aligned;
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 7574a88bf6..cec83b31f3 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -295,11 +295,16 @@ nix_fc_rq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
if (sso_ena < 0)
return -EINVAL;
- if (sso_ena)
+ if (sso_ena) {
roc_nix_fc_npa_bp_cfg(roc_nix, fc_cfg->rq_cfg.pool,
fc_cfg->rq_cfg.enable, true,
fc_cfg->rq_cfg.tc);
+ if (roc_nix->local_meta_aura_ena)
+ roc_nix_fc_npa_bp_cfg(roc_nix, roc_nix->meta_aura_handle,
+ fc_cfg->rq_cfg.enable, true, fc_cfg->rq_cfg.tc);
+ }
+
/* Copy RQ config to CQ config as they are occupying same area */
memset(&tmp, 0, sizeof(tmp));
tmp.type = ROC_NIX_FC_CQ_CFG;
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index 19f500ee54..076d83e8d5 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -20,97 +20,134 @@ PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ ==
1UL << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2);
static int
-nix_inl_meta_aura_destroy(void)
+nix_inl_meta_aura_destroy(struct roc_nix *roc_nix)
{
struct idev_cfg *idev = idev_get_cfg();
struct idev_nix_inl_cfg *inl_cfg;
+ char mempool_name[24] = {'\0'};
+ char *mp_name = NULL;
+ uint64_t *meta_aura;
int rc;
if (!idev)
return -EINVAL;
inl_cfg = &idev->inl_cfg;
+ if (roc_nix->local_meta_aura_ena) {
+ meta_aura = &roc_nix->meta_aura_handle;
+ snprintf(mempool_name, sizeof(mempool_name), "NIX_INL_META_POOL_%d",
+ roc_nix->port_id + 1);
+ mp_name = mempool_name;
+ } else {
+ meta_aura = &inl_cfg->meta_aura;
+ }
+
/* Destroy existing Meta aura */
- if (inl_cfg->meta_aura) {
+ if (*meta_aura) {
uint64_t avail, limit;
/* Check if all buffers are back to pool */
- avail = roc_npa_aura_op_available(inl_cfg->meta_aura);
- limit = roc_npa_aura_op_limit_get(inl_cfg->meta_aura);
+ avail = roc_npa_aura_op_available(*meta_aura);
+ limit = roc_npa_aura_op_limit_get(*meta_aura);
if (avail != limit)
plt_warn("Not all buffers are back to meta pool,"
" %" PRIu64 " != %" PRIu64, avail, limit);
- rc = meta_pool_cb(&inl_cfg->meta_aura, 0, 0, true);
+ rc = meta_pool_cb(meta_aura, &roc_nix->meta_mempool, 0, 0, true, mp_name);
if (rc) {
plt_err("Failed to destroy meta aura, rc=%d", rc);
return rc;
}
- inl_cfg->meta_aura = 0;
- inl_cfg->buf_sz = 0;
- inl_cfg->nb_bufs = 0;
- inl_cfg->refs = 0;
+
+ if (!roc_nix->local_meta_aura_ena) {
+ inl_cfg->meta_aura = 0;
+ inl_cfg->buf_sz = 0;
+ inl_cfg->nb_bufs = 0;
+ } else
+ roc_nix->buf_sz = 0;
}
return 0;
}
static int
-nix_inl_meta_aura_create(struct idev_cfg *idev, uint16_t first_skip)
+nix_inl_meta_aura_create(struct idev_cfg *idev, struct roc_nix *roc_nix, uint16_t first_skip,
+ uint64_t *meta_aura)
{
uint64_t mask = BIT_ULL(ROC_NPA_BUF_TYPE_PACKET_IPSEC);
struct idev_nix_inl_cfg *inl_cfg;
struct nix_inl_dev *nix_inl_dev;
+ int port_id = roc_nix->port_id;
+ char mempool_name[24] = {'\0'};
+ struct roc_nix_rq *inl_rq;
uint32_t nb_bufs, buf_sz;
+ char *mp_name = NULL;
+ uint16_t inl_rq_id;
+ uintptr_t mp;
int rc;
inl_cfg = &idev->inl_cfg;
nix_inl_dev = idev->nix_inl_dev;
- /* Override meta buf count from devargs if present */
- if (nix_inl_dev && nix_inl_dev->nb_meta_bufs)
- nb_bufs = nix_inl_dev->nb_meta_bufs;
- else
- nb_bufs = roc_npa_buf_type_limit_get(mask);
-
- /* Override meta buf size from devargs if present */
- if (nix_inl_dev && nix_inl_dev->meta_buf_sz)
- buf_sz = nix_inl_dev->meta_buf_sz;
- else
- buf_sz = first_skip + NIX_INL_META_SIZE;
+ if (roc_nix->local_meta_aura_ena) {
+ /* Per LF Meta Aura */
+ inl_rq_id = nix_inl_dev->nb_rqs > 1 ? port_id : 0;
+ inl_rq = &nix_inl_dev->rqs[inl_rq_id];
+
+ nb_bufs = roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+ if (inl_rq->spb_ena)
+ nb_bufs += roc_npa_aura_op_limit_get(inl_rq->spb_aura_handle);
+
+ /* Override meta buf size from NIX devargs if present */
+ if (roc_nix->meta_buf_sz)
+ buf_sz = roc_nix->meta_buf_sz;
+ else
+ buf_sz = first_skip + NIX_INL_META_SIZE;
+
+ /* Create Metapool name */
+ snprintf(mempool_name, sizeof(mempool_name), "NIX_INL_META_POOL_%d",
+ roc_nix->port_id + 1);
+ mp_name = mempool_name;
+ } else {
+ /* Global Meta Aura (Aura 0) */
+ /* Override meta buf count from devargs if present */
+ if (nix_inl_dev && nix_inl_dev->nb_meta_bufs)
+ nb_bufs = nix_inl_dev->nb_meta_bufs;
+ else
+ nb_bufs = roc_npa_buf_type_limit_get(mask);
+
+ /* Override meta buf size from devargs if present */
+ if (nix_inl_dev && nix_inl_dev->meta_buf_sz)
+ buf_sz = nix_inl_dev->meta_buf_sz;
+ else
+ buf_sz = first_skip + NIX_INL_META_SIZE;
+ }
/* Allocate meta aura */
- rc = meta_pool_cb(&inl_cfg->meta_aura, buf_sz, nb_bufs, false);
+ rc = meta_pool_cb(meta_aura, &mp, buf_sz, nb_bufs, false, mp_name);
if (rc) {
plt_err("Failed to allocate meta aura, rc=%d", rc);
return rc;
}
+ roc_nix->meta_mempool = mp;
+
+ if (!roc_nix->local_meta_aura_ena) {
+ inl_cfg->buf_sz = buf_sz;
+ inl_cfg->nb_bufs = nb_bufs;
+ } else
+ roc_nix->buf_sz = buf_sz;
- inl_cfg->buf_sz = buf_sz;
- inl_cfg->nb_bufs = nb_bufs;
return 0;
}
-int
-roc_nix_inl_meta_aura_check(struct roc_nix_rq *rq)
+static int
+nix_inl_global_meta_buffer_validate(struct idev_cfg *idev, struct roc_nix_rq *rq)
{
- struct idev_cfg *idev = idev_get_cfg();
struct idev_nix_inl_cfg *inl_cfg;
uint32_t actual, expected;
uint64_t mask, type_mask;
- int rc;
- if (!idev || !meta_pool_cb)
- return -EFAULT;
inl_cfg = &idev->inl_cfg;
-
- /* Create meta aura if not present */
- if (!inl_cfg->meta_aura) {
- rc = nix_inl_meta_aura_create(idev, rq->first_skip);
- if (rc)
- return rc;
- }
-
/* Validate if we have enough meta buffers */
mask = BIT_ULL(ROC_NPA_BUF_TYPE_PACKET_IPSEC);
expected = roc_npa_buf_type_limit_get(mask);
@@ -145,7 +182,7 @@ roc_nix_inl_meta_aura_check(struct roc_nix_rq *rq)
expected = roc_npa_buf_type_limit_get(mask);
if (actual < expected) {
- plt_err("VWQE aura shared b/w Inline inbound and non-Inline inbound "
+ plt_err("VWQE aura shared b/w Inline inbound and non-Inline "
"ports needs vwqe bufs(%u) minimum of all pkt bufs (%u)",
actual, expected);
return -EIO;
@@ -164,6 +201,71 @@ roc_nix_inl_meta_aura_check(struct roc_nix_rq *rq)
}
}
}
+ return 0;
+}
+
+static int
+nix_inl_local_meta_buffer_validate(struct roc_nix *roc_nix, struct roc_nix_rq *rq)
+{
+ /* Validate if we have enough space for meta buffer */
+ if (roc_nix->buf_sz && (rq->first_skip + NIX_INL_META_SIZE > roc_nix->buf_sz)) {
+ plt_err("Meta buffer size %u not sufficient to meet RQ first skip %u",
+ roc_nix->buf_sz, rq->first_skip);
+ return -EIO;
+ }
+
+ /* TODO: Validate VWQE buffers */
+
+ return 0;
+}
+
+int
+roc_nix_inl_meta_aura_check(struct roc_nix *roc_nix, struct roc_nix_rq *rq)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct idev_cfg *idev = idev_get_cfg();
+ struct idev_nix_inl_cfg *inl_cfg;
+ bool aura_setup = false;
+ uint64_t *meta_aura;
+ int rc;
+
+ if (!idev || !meta_pool_cb)
+ return -EFAULT;
+
+ inl_cfg = &idev->inl_cfg;
+
+ /* Create meta aura if not present */
+ if (roc_nix->local_meta_aura_ena)
+ meta_aura = &roc_nix->meta_aura_handle;
+ else
+ meta_aura = &inl_cfg->meta_aura;
+
+ if (!(*meta_aura)) {
+ rc = nix_inl_meta_aura_create(idev, roc_nix, rq->first_skip, meta_aura);
+ if (rc)
+ return rc;
+
+ aura_setup = true;
+ }
+ /* Update rq meta aura handle */
+ rq->meta_aura_handle = *meta_aura;
+
+ if (roc_nix->local_meta_aura_ena) {
+ rc = nix_inl_local_meta_buffer_validate(roc_nix, rq);
+ if (rc)
+ return rc;
+
+ /* Check for TC config on RQ 0 when local meta aura is used as
+ * inline meta aura creation is delayed.
+ */
+ if (aura_setup && nix->rqs[0] && nix->rqs[0]->tc != ROC_NIX_PFC_CLASS_INVALID)
+ roc_nix_fc_npa_bp_cfg(roc_nix, roc_nix->meta_aura_handle,
+ true, true, nix->rqs[0]->tc);
+ } else {
+ rc = nix_inl_global_meta_buffer_validate(idev, rq);
+ if (rc)
+ return rc;
+ }
return 0;
}
@@ -426,6 +528,7 @@ nix_inl_rq_mask_cfg(struct roc_nix *roc_nix, bool enable)
struct idev_nix_inl_cfg *inl_cfg;
uint64_t aura_handle;
int rc = -ENOSPC;
+ uint32_t buf_sz;
int i;
if (!idev)
@@ -473,10 +576,21 @@ nix_inl_rq_mask_cfg(struct roc_nix *roc_nix, bool enable)
msk_req->rq_mask.xqe_drop_ena = 0;
msk_req->rq_mask.spb_ena = 0;
- aura_handle = roc_npa_zero_aura_handle();
+ if (roc_nix->local_meta_aura_ena) {
+ aura_handle = roc_nix->meta_aura_handle;
+ buf_sz = roc_nix->buf_sz;
+ if (!aura_handle && enable) {
+ plt_err("NULL meta aura handle");
+ goto exit;
+ }
+ } else {
+ aura_handle = roc_npa_zero_aura_handle();
+ buf_sz = inl_cfg->buf_sz;
+ }
+
msk_req->ipsec_cfg1.spb_cpt_aura = roc_npa_aura_handle_to_aura(aura_handle);
msk_req->ipsec_cfg1.rq_mask_enable = enable;
- msk_req->ipsec_cfg1.spb_cpt_sizem1 = (inl_cfg->buf_sz >> 7) - 1;
+ msk_req->ipsec_cfg1.spb_cpt_sizem1 = (buf_sz >> 7) - 1;
msk_req->ipsec_cfg1.spb_cpt_enable = enable;
rc = mbox_process(mbox);
@@ -539,7 +653,8 @@ roc_nix_inl_inb_init(struct roc_nix *roc_nix)
if (!roc_model_is_cn9k() && !roc_errata_nix_no_meta_aura()) {
nix->need_meta_aura = true;
- idev->inl_cfg.refs++;
+ if (!roc_nix->local_meta_aura_ena)
+ idev->inl_cfg.refs++;
}
nix->inl_inb_ena = true;
@@ -562,9 +677,13 @@ roc_nix_inl_inb_fini(struct roc_nix *roc_nix)
nix->inl_inb_ena = false;
if (nix->need_meta_aura) {
nix->need_meta_aura = false;
- idev->inl_cfg.refs--;
- if (!idev->inl_cfg.refs)
- nix_inl_meta_aura_destroy();
+ if (roc_nix->local_meta_aura_ena) {
+ nix_inl_meta_aura_destroy(roc_nix);
+ } else {
+ idev->inl_cfg.refs--;
+ if (!idev->inl_cfg.refs)
+ nix_inl_meta_aura_destroy(roc_nix);
+ }
}
if (roc_feature_nix_has_inl_rq_mask()) {
@@ -968,7 +1087,7 @@ roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq, bool enable)
/* Check meta aura */
if (enable && nix->need_meta_aura) {
- rc = roc_nix_inl_meta_aura_check(rq);
+ rc = roc_nix_inl_meta_aura_check(rq->roc_nix, rq);
if (rc)
return rc;
}
@@ -1058,7 +1177,7 @@ roc_nix_inl_rq_ena_dis(struct roc_nix *roc_nix, bool enable)
return rc;
if (enable && nix->need_meta_aura)
- return roc_nix_inl_meta_aura_check(inl_rq);
+ return roc_nix_inl_meta_aura_check(roc_nix, inl_rq);
}
return 0;
}
@@ -1084,15 +1203,22 @@ roc_nix_inl_inb_set(struct roc_nix *roc_nix, bool ena)
* managed outside RoC.
*/
nix->inl_inb_ena = ena;
- if (!roc_model_is_cn9k() && !roc_errata_nix_no_meta_aura()) {
- if (ena) {
- nix->need_meta_aura = true;
+
+ if (roc_model_is_cn9k() || roc_errata_nix_no_meta_aura())
+ return;
+
+ if (ena) {
+ nix->need_meta_aura = true;
+ if (!roc_nix->local_meta_aura_ena)
idev->inl_cfg.refs++;
- } else if (nix->need_meta_aura) {
- nix->need_meta_aura = false;
+ } else if (nix->need_meta_aura) {
+ nix->need_meta_aura = false;
+ if (roc_nix->local_meta_aura_ena) {
+ nix_inl_meta_aura_destroy(roc_nix);
+ } else {
idev->inl_cfg.refs--;
if (!idev->inl_cfg.refs)
- nix_inl_meta_aura_destroy();
+ nix_inl_meta_aura_destroy(roc_nix);
}
}
}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 105a9e4ec4..6220ba6773 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -118,8 +118,9 @@ roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(void *sa)
typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args,
uint32_t soft_exp_event);
-typedef int (*roc_nix_inl_meta_pool_cb_t)(uint64_t *aura_handle, uint32_t blk_sz, uint32_t nb_bufs,
- bool destroy);
+typedef int (*roc_nix_inl_meta_pool_cb_t)(uint64_t *aura_handle, uintptr_t *mpool,
+ uint32_t blk_sz, uint32_t nb_bufs, bool destroy,
+ const char *mempool_name);
struct roc_nix_inl_dev {
/* Input parameters */
@@ -181,7 +182,7 @@ int __roc_api roc_nix_reassembly_configure(uint32_t max_wait_time,
int __roc_api roc_nix_inl_ts_pkind_set(struct roc_nix *roc_nix, bool ts_ena,
bool inb_inl_dev);
int __roc_api roc_nix_inl_rq_ena_dis(struct roc_nix *roc_nix, bool ena);
-int __roc_api roc_nix_inl_meta_aura_check(struct roc_nix_rq *rq);
+int __roc_api roc_nix_inl_meta_aura_check(struct roc_nix *roc_nix, struct roc_nix_rq *rq);
/* NIX Inline Outbound API */
int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 33b2cdf90f..464ee0b984 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -102,7 +102,7 @@ roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
/* Check for meta aura if RQ is enabled */
if (enable && nix->need_meta_aura)
- rc = roc_nix_inl_meta_aura_check(rq);
+ rc = roc_nix_inl_meta_aura_check(rq->roc_nix, rq);
return rc;
}
@@ -691,7 +691,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
/* Check for meta aura if RQ is enabled */
if (ena && nix->need_meta_aura) {
- rc = roc_nix_inl_meta_aura_check(rq);
+ rc = roc_nix_inl_meta_aura_check(roc_nix, rq);
if (rc)
return rc;
}
@@ -745,7 +745,7 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
/* Check for meta aura if RQ is enabled */
if (ena && nix->need_meta_aura) {
- rc = roc_nix_inl_meta_aura_check(rq);
+ rc = roc_nix_inl_meta_aura_check(roc_nix, rq);
if (rc)
return rc;
}
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 8e74edff55..b1cf43ee57 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -843,7 +843,7 @@ cn10k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
}
static void
-cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem, uint64_t meta_aura)
+cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
int i;
@@ -855,8 +855,6 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem, u
ws->tstamp = dev->tstamp;
if (lookup_mem)
ws->lookup_mem = lookup_mem;
- if (meta_aura)
- ws->meta_aura = meta_aura;
}
}
@@ -867,7 +865,6 @@ cn10k_sso_rx_adapter_queue_add(
const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
{
struct cn10k_eth_rxq *rxq;
- uint64_t meta_aura;
void *lookup_mem;
int rc;
@@ -881,8 +878,7 @@ cn10k_sso_rx_adapter_queue_add(
return -EINVAL;
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
- meta_aura = rxq->meta_aura;
- cn10k_sso_set_priv_mem(event_dev, lookup_mem, meta_aura);
+ cn10k_sso_set_priv_mem(event_dev, lookup_mem);
cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
return 0;
@@ -1056,7 +1052,7 @@ cn10k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
ret = cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id, conf);
- cn10k_sso_set_priv_mem(event_dev, NULL, 0);
+ cn10k_sso_set_priv_mem(event_dev, NULL);
return ret;
}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 2bea1f6ca6..06c71c6092 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -55,9 +55,10 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc
struct cnxk_timesync_info *tstamp = ws->tstamp[port_id];
void *lookup_mem = ws->lookup_mem;
uintptr_t lbase = ws->lmt_base;
+ uint64_t meta_aura = 0, laddr;
struct rte_event_vector *vec;
- uint64_t meta_aura, laddr;
uint16_t nb_mbufs, non_vec;
+ struct rte_mempool *mp;
uint16_t lmt_id, d_off;
struct rte_mbuf **wqe;
struct rte_mbuf *mbuf;
@@ -77,7 +78,12 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc
if (flags & NIX_RX_OFFLOAD_TSTAMP_F && tstamp)
mbuf_init |= 8;
- meta_aura = ws->meta_aura;
+ if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+ mp = (struct rte_mempool *)cnxk_nix_inl_metapool_get(port_id, lookup_mem);
+ if (mp)
+ meta_aura = mp->pool_id;
+ }
+
nb_mbufs = RTE_ALIGN_FLOOR(vec->nb_elem, NIX_DESCS_PER_LOOP);
nb_mbufs = cn10k_nix_recv_pkts_vector(&mbuf_init, wqe, nb_mbufs,
flags | NIX_RX_VWQE_F,
@@ -94,7 +100,6 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags, struc
/* Pick first mbuf's aura handle assuming all
* mbufs are from a vec and are from same RQ.
*/
- meta_aura = ws->meta_aura;
if (!meta_aura)
meta_aura = mbuf->pool->pool_id;
ROC_LMT_BASE_ID_GET(lbase, lmt_id);
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 131d42a95b..7e8339bd3a 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -945,8 +945,7 @@ cn9k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
}
static void
-cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem,
- uint64_t aura __rte_unused)
+cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
{
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
int i;
@@ -992,7 +991,7 @@ cn9k_sso_rx_adapter_queue_add(
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
- cn9k_sso_set_priv_mem(event_dev, lookup_mem, 0);
+ cn9k_sso_set_priv_mem(event_dev, lookup_mem);
cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
return 0;
@@ -1141,7 +1140,7 @@ cn9k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
ret = cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id, conf);
- cn9k_sso_set_priv_mem(event_dev, NULL, 0);
+ cn9k_sso_set_priv_mem(event_dev, NULL);
return ret;
}
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index fac3806e14..121480df15 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -265,7 +265,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
RTE_EVENT_TYPE_TIMER);
cnxk_sso_xae_reconfigure(dev->event_dev);
- sso_set_priv_mem_fn(dev->event_dev, NULL, 0);
+ sso_set_priv_mem_fn(dev->event_dev, NULL);
plt_tim_dbg(
"Total memory used %" PRIu64 "MB\n",
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 7253a37d3d..3a0b036cb4 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -81,7 +81,7 @@
(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
typedef void (*cnxk_sso_set_priv_mem_t)(const struct rte_eventdev *event_dev,
- void *lookup_mem, uint64_t aura);
+ void *lookup_mem);
struct cnxk_tim_ctl {
uint16_t ring;
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 2dbca698af..019c8299ce 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -362,6 +362,8 @@ cn10k_nix_rx_queue_meta_aura_update(struct rte_eth_dev *eth_dev)
rxq->meta_aura = rxq_sp->qconf.mp->pool_id;
}
}
+ /* Store mempool in lookup mem */
+ cnxk_nix_lookup_mem_metapool_set(dev);
}
static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index d8ccd307a8..1cae3084e1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -275,6 +275,8 @@ nix_security_release(struct cnxk_eth_dev *dev)
plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
ret |= rc;
+ cnxk_nix_lookup_mem_metapool_clear(dev);
+
if (dev->inb.sa_dptr) {
plt_free(dev->inb.sa_dptr);
dev->inb.sa_dptr = NULL;
@@ -1852,6 +1854,8 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
nix->pci_dev = pci_dev;
nix->hw_vlan_ins = true;
nix->port_id = eth_dev->data->port_id;
+ if (roc_feature_nix_has_own_meta_aura())
+ nix->local_meta_aura_ena = true;
rc = roc_nix_dev_init(nix);
if (rc) {
plt_err("Failed to initialize roc nix rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index f0eab4244c..12c56ccd55 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -594,6 +594,8 @@ int cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p,
int cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx);
int cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev);
int cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev);
+int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
+int cnxk_nix_lookup_mem_metapool_clear(struct cnxk_eth_dev *dev);
__rte_internal
int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
@@ -601,8 +603,8 @@ struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
struct cnxk_eth_sec_sess *
cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
struct rte_security_session *sess);
-int cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bufs,
- bool destroy);
+int cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_sz,
+ uint32_t nb_bufs, bool destroy, const char *mempool_name);
/* Congestion Management */
int cnxk_nix_cman_info_get(struct rte_eth_dev *dev, struct rte_eth_cman_info *info);
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index dbf5bd847d..e1a0845ece 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -182,6 +182,22 @@ parse_sqb_count(const char *key, const char *value, void *extra_args)
return 0;
}
+static int
+parse_meta_bufsize(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint32_t val;
+
+ errno = 0;
+ val = strtoul(value, NULL, 0);
+ if (errno)
+ val = 0;
+
+ *(uint32_t *)extra_args = val;
+
+ return 0;
+}
+
static int
parse_switch_header_type(const char *key, const char *value, void *extra_args)
{
@@ -248,6 +264,7 @@ parse_sdp_channel_mask(const char *key, const char *value, void *extra_args)
#define CNXK_FLOW_PRE_L2_INFO "flow_pre_l2_info"
#define CNXK_CUSTOM_SA_ACT "custom_sa_act"
#define CNXK_SQB_SLACK "sqb_slack"
+#define CNXK_NIX_META_BUF_SZ "meta_buf_sz"
int
cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
@@ -270,6 +287,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
uint16_t tx_compl_ena = 0;
uint16_t custom_sa_act = 0;
struct rte_kvargs *kvlist;
+ uint32_t meta_buf_sz = 0;
uint16_t no_inl_dev = 0;
uint8_t lock_rx_ctx = 0;
@@ -319,6 +337,7 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
&custom_sa_act);
rte_kvargs_process(kvlist, CNXK_SQB_SLACK, &parse_sqb_count,
&sqb_slack);
+ rte_kvargs_process(kvlist, CNXK_NIX_META_BUF_SZ, &parse_meta_bufsize, &meta_buf_sz);
rte_kvargs_free(kvlist);
null_devargs:
@@ -337,6 +356,10 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
dev->nix.lock_rx_ctx = lock_rx_ctx;
dev->nix.custom_sa_action = custom_sa_act;
dev->nix.sqb_slack = sqb_slack;
+
+ if (roc_feature_nix_has_own_meta_aura())
+ dev->nix.meta_buf_sz = meta_buf_sz;
+
dev->npc.flow_prealloc_size = flow_prealloc_size;
dev->npc.flow_max_priority = flow_max_priority;
dev->npc.switch_header_type = switch_header_type;
diff --git a/drivers/net/cnxk/cnxk_ethdev_dp.h b/drivers/net/cnxk/cnxk_ethdev_dp.h
index a812c78eda..c1f99a2616 100644
--- a/drivers/net/cnxk/cnxk_ethdev_dp.h
+++ b/drivers/net/cnxk/cnxk_ethdev_dp.h
@@ -34,6 +34,9 @@
#define ERRCODE_ERRLEN_WIDTH 12
#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
+#define SA_BASE_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uintptr_t))
+#define MEMPOOL_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uintptr_t))
+
#define CNXK_NIX_UDP_TUN_BITMASK \
((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
(1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
@@ -164,4 +167,14 @@ cnxk_nix_sa_base_get(uint16_t port, const void *lookup_mem)
return *((const uintptr_t *)sa_base_tbl + port);
}
+static __rte_always_inline uintptr_t
+cnxk_nix_inl_metapool_get(uint16_t port, const void *lookup_mem)
+{
+ uintptr_t metapool_tbl;
+
+ metapool_tbl = (uintptr_t)lookup_mem;
+ metapool_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ;
+ return *((const uintptr_t *)metapool_tbl + port);
+}
+
#endif /* __CNXK_ETHDEV_DP_H__ */
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index 6c71f9554b..aa8a378a00 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -38,15 +38,22 @@ bitmap_ctzll(uint64_t slab)
}
int
-cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bufs, bool destroy)
+cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uintptr_t *mpool, uint32_t buf_sz,
+ uint32_t nb_bufs, bool destroy, const char *mempool_name)
{
- const char *mp_name = CNXK_NIX_INL_META_POOL_NAME;
+ const char *mp_name = NULL;
struct rte_pktmbuf_pool_private mbp_priv;
struct npa_aura_s *aura;
struct rte_mempool *mp;
uint16_t first_skip;
int rc;
+ /* A NULL mempool name indicates allocation on the zero aura (global meta pool). */
+ if (!mempool_name)
+ mp_name = CNXK_NIX_INL_META_POOL_NAME;
+ else
+ mp_name = mempool_name;
+
/* Destroy the mempool if requested */
if (destroy) {
mp = rte_mempool_lookup(mp_name);
@@ -62,6 +69,7 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bu
rte_mempool_free(mp);
*aura_handle = 0;
+ *mpool = 0;
return 0;
}
@@ -83,10 +91,12 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bu
goto free_mp;
}
aura->ena = 1;
- aura->pool_addr = 0x0;
+ if (!mempool_name)
+ aura->pool_addr = 0;
+ else
+ aura->pool_addr = 1; /* Any non-zero value, so allocation starts from the next free index */
- rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(),
- aura);
+ rc = rte_mempool_set_ops_byname(mp, rte_mbuf_platform_mempool_ops(), aura);
if (rc) {
plt_err("Failed to setup mempool ops for meta, rc=%d", rc);
goto free_aura;
@@ -108,6 +118,7 @@ cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bu
rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
*aura_handle = mp->pool_id;
+ *mpool = (uintptr_t)mp;
return 0;
free_aura:
plt_free(aura);
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 6d561f194f..c0a7129a9c 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -7,8 +7,7 @@
#include "cnxk_ethdev.h"
-#define SA_BASE_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uintptr_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ)
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ + MEMPOOL_TBL_SZ)
const uint32_t *
cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
{
@@ -371,3 +370,37 @@ cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev)
*((uintptr_t *)sa_base_tbl + port) = 0;
return 0;
}
+
+int
+cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev)
+{
+ void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+ uint16_t port = dev->eth_dev->data->port_id;
+ uintptr_t mp_tbl;
+
+ if (!lookup_mem)
+ return -EIO;
+
+ /* Set Mempool in lookup mem */
+ mp_tbl = (uintptr_t)lookup_mem;
+ mp_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ;
+ *((uintptr_t *)mp_tbl + port) = dev->nix.meta_mempool;
+ return 0;
+}
+
+int
+cnxk_nix_lookup_mem_metapool_clear(struct cnxk_eth_dev *dev)
+{
+ void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+ uint16_t port = dev->eth_dev->data->port_id;
+ uintptr_t mp_tbl;
+
+ if (!lookup_mem)
+ return -EIO;
+
+ /* Clear Mempool in lookup mem */
+ mp_tbl = (uintptr_t)lookup_mem;
+ mp_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ;
+ *((uintptr_t *)mp_tbl + port) = 0;
+ return 0;
+}
--
2.25.1
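The fast path locates the per-port meta pool through the shared lookup
memory, which after this patch holds four consecutive tables. A sketch of the
layout and of the fetch, mirroring the helpers above (offsets symbolic, sizes
as defined in cnxk_ethdev_dp.h):

	/*
	 * lookup_mem layout (single block shared by all ports):
	 *
	 * [ PTYPE_ARRAY_SZ ][ ERR_ARRAY_SZ ][ SA_BASE_TBL_SZ ][ MEMPOOL_TBL_SZ ]
	 *                                     per-port SA base  per-port meta
	 *                                     pointers          mempool pointers
	 */
	static inline uintptr_t
	metapool_entry(const void *lookup_mem, uint16_t port)
	{
		uintptr_t tbl = (uintptr_t)lookup_mem + PTYPE_ARRAY_SZ +
				ERR_ARRAY_SZ + SA_BASE_TBL_SZ;

		return *((const uintptr_t *)tbl + port);
	}

In cn10k_process_vwqe() the value fetched this way is cast back to a
struct rte_mempool * and its pool_id becomes the meta aura, replacing the
per-work-slot ws->meta_aura cache that this patch removes.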
* [PATCH 12/15] common/cnxk: enable one to one SQ QINT mapping
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (9 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 11/15] common/cnxk: support of per NIX LF meta aura Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 13/15] common/cnxk: add RSS error messages on mbox failure Nithin Dabilpuram
` (2 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Harman Kalra
From: Harman Kalra <hkalra@marvell.com>
Enable one-to-one mapping between SQs and the 64 QINTs per LF. On an
SQ interrupt event, NIX can then deliver the associated QINT MSI-X
interrupt to software and increment the respective QINT count CSR.
For the cn10k chip models affected by the errata where NIX may use an
incorrect QINT_IDX for SQ interrupts, keep the existing workaround of
assigning all SQs to the same QINT index.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/common/cnxk/roc_errata.h | 8 ++++++++
drivers/common/cnxk/roc_nix_queue.c | 21 +++++++++++----------
2 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/drivers/common/cnxk/roc_errata.h b/drivers/common/cnxk/roc_errata.h
index 36e6db467a..356f9ca626 100644
--- a/drivers/common/cnxk/roc_errata.h
+++ b/drivers/common/cnxk/roc_errata.h
@@ -98,4 +98,12 @@ roc_errata_nix_sdp_send_has_mtu_size_16k(void)
roc_model_is_cn96_a0() || roc_model_is_cn96_b0());
}
+/* Errata IPBUNIXTX-39300 */
+static inline bool
+roc_errata_nix_assign_incorrect_qint(void)
+{
+ return (roc_model_is_cn10ka_a0() || roc_model_is_cnf10ka_a0() ||
+ roc_model_is_cnf10kb_a0() || roc_model_is_cn10ka_a1());
+}
+
#endif /* _ROC_ERRATA_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 464ee0b984..21bfe7d498 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -1103,11 +1103,8 @@ sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
/* Many to one reduction */
- /* Assigning QINT 0 to all the SQs, an errata exists where NIXTX can
- * send incorrect QINT_IDX when reporting queue interrupt (QINT). This
- * might result in software missing the interrupt.
- */
- aq->sq.qint_idx = 0;
+ aq->sq.qint_idx = sq->qid % nix->qints;
+
return 0;
}
@@ -1237,11 +1234,15 @@ sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, uint16_t sm
aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
- /* Assigning QINT 0 to all the SQs, an errata exists where NIXTX can
- * send incorrect QINT_IDX when reporting queue interrupt (QINT). This
- * might result in software missing the interrupt.
- */
- aq->sq.qint_idx = 0;
+ /* Many to one reduction */
+ aq->sq.qint_idx = sq->qid % nix->qints;
+ if (roc_errata_nix_assign_incorrect_qint()) {
+ /* Assigning QINT 0 to all the SQs, an errata exists where NIXTX can
+ * send incorrect QINT_IDX when reporting queue interrupt (QINT). This
+ * might result in software missing the interrupt.
+ */
+ aq->sq.qint_idx = 0;
+ }
return 0;
}
--
2.25.1
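A sketch of the resulting assignment, with the errata fallback (logic as in
sq_init() above; the example numbers assume the 64 QINTs per LF mentioned in
the commit message):

	/* One-to-one (modulo) SQ -> QINT mapping: with 64 QINTs,
	 * SQ 0 -> QINT 0, SQ 1 -> QINT 1, ..., SQ 64 -> QINT 0 again.
	 */
	uint16_t qint_idx = sq->qid % nix->qints;

	/* On parts hit by errata IPBUNIXTX-39300, NIX may report a wrong
	 * QINT_IDX, so all SQs stay funneled through QINT 0 there.
	 */
	if (roc_errata_nix_assign_incorrect_qint())
		qint_idx = 0;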
* [PATCH 13/15] common/cnxk: add RSS error messages on mbox failure
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (10 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 12/15] common/cnxk: enable one to one SQ QINT mapping Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 14/15] common/cnxk: add memory clobber to steor and ldeor Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 15/15] common/cnxk: enable SDP channel backpressure to TL4 Nithin Dabilpuram
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Thomas Monjalon, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Hiral
From: Hiral <hshah@marvell.com>
Clarify the RSS error messages reported from process_msgs() on mbox
failure by decoding the error code into a descriptive string.
Signed-off-by: Hiral <hshah@marvell.com>
---
.mailmap | 1 +
drivers/common/cnxk/roc_dev.c | 4 ++--
drivers/common/cnxk/roc_utils.c | 6 ++++++
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/.mailmap b/.mailmap
index 5015494210..ef4c4c36a2 100644
--- a/.mailmap
+++ b/.mailmap
@@ -487,6 +487,7 @@ Herbert Guan <herbert.guan@arm.com>
Hernan Vargas <hernan.vargas@intel.com>
Herry Chen <herry.chen@broadcom.com>
Hideyuki Yamashita <yamashita.hideyuki@po.ntt-tx.co.jp>
+Hiral <hshah@marvell.com>
Hiroki Shirokura <slank.dev@gmail.com>
Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
Hiroyuki Mikita <h.mikita89@gmail.com>
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 826887a97b..2388237186 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -441,8 +441,8 @@ process_msgs(struct dev *dev, struct mbox *mbox)
default:
if (msg->rc)
- plt_err("Message (%s) response has err=%d",
- mbox_id2name(msg->id), msg->rc);
+ plt_err("Message (%s) response has err=%d (%s)",
+ mbox_id2name(msg->id), msg->rc, roc_error_msg_get(msg->rc));
break;
}
offset = mbox->rx_start + msg->next_msgoff;
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index 495e62a315..fe291fce96 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -229,6 +229,12 @@ roc_error_msg_get(int errorcode)
case UTIL_ERR_INVALID_MODEL:
err_msg = "Invalid RoC model";
break;
+ case NIX_AF_ERR_RSS_NOSPC_FIELD:
+ err_msg = "No space or unsupported fields";
+ break;
+ case NIX_AF_ERR_RSS_NOSPC_ALGO:
+ err_msg = "No space to add new flow hash algo";
+ break;
default:
/**
* Handle general error (as defined in linux errno.h)
--
2.25.1
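With this change a failing RSS mbox request is logged with a decoded reason.
A reduced fragment showing how the two hunks combine (msg is a received mbox
response, as in process_msgs() above):

	if (msg->rc)
		/* roc_error_msg_get() now also decodes the RSS AF errors */
		plt_err("Message (%s) response has err=%d (%s)",
			mbox_id2name(msg->id), msg->rc,
			roc_error_msg_get(msg->rc));

So a NIX_AF_ERR_RSS_NOSPC_ALGO response, for example, is reported as
"No space to add new flow hash algo" instead of a bare numeric code.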
* [PATCH 14/15] common/cnxk: add memory clobber to steor and ldeor
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (11 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 13/15] common/cnxk: add RSS error messages on mbox failure Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-03 8:10 ` [PATCH 15/15] common/cnxk: enable SDP channel backpressure to TL4 Nithin Dabilpuram
13 siblings, 0 replies; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: jerinj, dev
To prevent the compiler from reordering stores to the LMT line past
the ldeor/steor instructions, add a "memory" clobber to the ldeor,
steor, etc. inline assembly.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_io.h | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h
index 1e5c1f8c04..af1a10cd66 100644
--- a/drivers/common/cnxk/roc_io.h
+++ b/drivers/common/cnxk/roc_io.h
@@ -130,7 +130,8 @@ roc_lmt_submit_ldeor(plt_iova_t io_address)
asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeor xzr, %x[rf], [%[rs]]"
: [rf] "=r"(result)
- : [rs] "r"(io_address));
+ : [rs] "r"(io_address)
+ : "memory");
return result;
}
@@ -141,7 +142,8 @@ roc_lmt_submit_ldeorl(plt_iova_t io_address)
asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeorl xzr,%x[rf],[%[rs]]"
: [rf] "=r"(result)
- : [rs] "r"(io_address));
+ : [rs] "r"(io_address)
+ : "memory");
return result;
}
@@ -150,7 +152,8 @@ roc_lmt_submit_steor(uint64_t data, plt_iova_t io_address)
{
asm volatile(PLT_CPU_FEATURE_PREAMBLE
"steor %x[d], [%[rs]]" ::[d] "r"(data),
- [rs] "r"(io_address));
+ [rs] "r"(io_address)
+ : "memory");
}
static __plt_always_inline void
@@ -158,7 +161,8 @@ roc_lmt_submit_steorl(uint64_t data, plt_iova_t io_address)
{
asm volatile(PLT_CPU_FEATURE_PREAMBLE
"steorl %x[d], [%[rs]]" ::[d] "r"(data),
- [rs] "r"(io_address));
+ [rs] "r"(io_address)
+ : "memory");
}
static __plt_always_inline void
--
2.25.1
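The clobber matters because the payload submitted by steor/ldeor lives in an
LMT line that plain C stores have just filled. A reduced standalone sketch of
the pattern (the function is illustrative; the real helpers also prepend
PLT_CPU_FEATURE_PREAMBLE to select the LSE instruction set):

	/* Without the "memory" clobber, the compiler may sink the store to
	 * lmt_line[0] past the asm and submit stale LMT data to hardware.
	 */
	static inline void
	lmt_fill_and_submit(uint64_t *lmt_line, uint64_t data, uint64_t io_addr)
	{
		lmt_line[0] = data;	/* plain C store into the LMT line */

		asm volatile("steorl %x[d], [%[rs]]"
			     :
			     : [d] "r"(data), [rs] "r"(io_addr)
			     : "memory");	/* orders the store above first */
	}

The "memory" clobber tells the compiler the asm may read or write arbitrary
memory, so pending stores must complete before the instruction and no cached
memory values may be reused across it.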
* [PATCH 15/15] common/cnxk: enable SDP channel backpressure to TL4
2023-03-03 8:09 [PATCH 01/15] net/cnxk: resolve segfault caused during transmit completion Nithin Dabilpuram
` (12 preceding siblings ...)
2023-03-03 8:10 ` [PATCH 14/15] common/cnxk: add memory clobber to steor and ldeor Nithin Dabilpuram
@ 2023-03-03 8:10 ` Nithin Dabilpuram
2023-03-06 9:55 ` Jerin Jacob
13 siblings, 1 reply; 16+ messages in thread
From: Nithin Dabilpuram @ 2023-03-03 8:10 UTC (permalink / raw)
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: jerinj, dev, Veerasenareddy Burru
From: Veerasenareddy Burru <vburru@marvell.com>
Configure TL4 to respond to SDP channel backpressure.
Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
---
drivers/common/cnxk/roc_nix_tm_utils.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index a7ba2bf027..5864833109 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -582,8 +582,12 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
/* Configure TL4 to send to SDP channel instead of CGX/LBK */
if (nix->sdp_link) {
+ plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u\n", relchan, schq,
+ nix->tx_chan_cnt);
reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
regval[k] = BIT_ULL(12);
+ regval[k] |= BIT_ULL(13);
+ regval[k] |= relchan;
k++;
}
break;
--
2.25.1
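The resulting register value can be sketched as below (bit positions from the
hunk above; the field descriptions are an interpretation, not taken from the
patch):

	/* NIX_AF_TL4X_SDP_LINK_CFG(schq):
	 *   BIT(12)   - steer this TL4 to the SDP link (pre-existing)
	 *   BIT(13)   - respond to SDP channel backpressure (new)
	 *   low bits  - relative channel to watch for backpressure (new)
	 */
	regval[k] = BIT_ULL(12) | BIT_ULL(13) | (uint64_t)relchan;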
* Re: [PATCH 15/15] common/cnxk: enable SDP channel backpressure to TL4
2023-03-03 8:10 ` [PATCH 15/15] common/cnxk: enable SDP channel backpressure to TL4 Nithin Dabilpuram
@ 2023-03-06 9:55 ` Jerin Jacob
0 siblings, 0 replies; 16+ messages in thread
From: Jerin Jacob @ 2023-03-06 9:55 UTC (permalink / raw)
To: Nithin Dabilpuram
Cc: Kiran Kumar K, Sunil Kumar Kori, Satha Rao, jerinj, dev,
Veerasenareddy Burru
On Fri, Mar 3, 2023 at 1:42 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> From: Veerasenareddy Burru <vburru@marvell.com>
>
> Configure TL4 to respond to SDP channel backpressure.
>
> Signed-off-by: Veerasenareddy Burru <vburru@marvell.com>
Changed git commit logs and added Fixes: as needed.
Series applied to dpdk-next-net-mrvl/for-next-net. Thanks
> ---
> drivers/common/cnxk/roc_nix_tm_utils.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
> index a7ba2bf027..5864833109 100644
> --- a/drivers/common/cnxk/roc_nix_tm_utils.c
> +++ b/drivers/common/cnxk/roc_nix_tm_utils.c
> @@ -582,8 +582,12 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
>
> /* Configure TL4 to send to SDP channel instead of CGX/LBK */
> if (nix->sdp_link) {
> + plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u\n", relchan, schq,
> + nix->tx_chan_cnt);
> reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
> regval[k] = BIT_ULL(12);
> + regval[k] |= BIT_ULL(13);
> + regval[k] |= relchan;
> k++;
> }
> break;
> --
> 2.25.1
>