* [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
@ 2020-03-06 16:35 pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
` (2 more replies)
0 siblings, 3 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-06 16:35 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in NDC cache.
The device argument takes a hexadecimal bitmask where each bit represents
the corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/octeontx2.rst | 9 +++
doc/guides/mempool/octeontx2.rst | 9 +++
doc/guides/nics/octeontx2.rst | 9 +++
drivers/common/octeontx2/Makefile | 2 +-
drivers/common/octeontx2/meson.build | 2 +-
drivers/common/octeontx2/otx2_common.c | 35 ++++++++++
drivers/common/octeontx2/otx2_common.h | 3 +
.../rte_common_octeontx2_version.map | 7 ++
drivers/event/octeontx2/otx2_evdev.c | 2 +-
drivers/mempool/octeontx2/otx2_mempool.c | 1 +
drivers/mempool/octeontx2/otx2_mempool_ops.c | 68 +++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 1 +
12 files changed, 145 insertions(+), 3 deletions(-)
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index d4b2515ce..bde46fa70 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -148,6 +148,15 @@ Runtime Config Options
-w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device argument takes a hexadecimal bitmask where each bit represents
+ the corresponding aura/pool id.
+
+ For example::
+ -w 0002:0e:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 2c9a0953b..c594934d8 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -61,6 +61,15 @@ Runtime Config Options
provide ``max_pools`` parameter to the first PCIe device probed by the given
application.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device argument takes a hexadecimal bitmask where each bit represents
+ the corresponding aura/pool id.
+
+ For example::
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec72..819d09e11 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -213,6 +213,15 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device argument takes a hexadecimal bitmask where each bit represents
+ the corresponding aura/pool id.
+
+ For example::
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Limitations
-----------
diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
index 48f033dc6..64c5e60e2 100644
--- a/drivers/common/octeontx2/Makefile
+++ b/drivers/common/octeontx2/Makefile
@@ -35,6 +35,6 @@ SRCS-y += otx2_common.c
SRCS-y += otx2_sec_idev.c
LDLIBS += -lrte_eal
-LDLIBS += -lrte_ethdev
+LDLIBS += -lrte_ethdev -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
index cc2c26123..bc4917b8c 100644
--- a/drivers/common/octeontx2/meson.build
+++ b/drivers/common/octeontx2/meson.build
@@ -23,6 +23,6 @@ foreach flag: extra_flags
endif
endforeach
-deps = ['eal', 'pci', 'ethdev']
+deps = ['eal', 'pci', 'ethdev', 'kvargs']
includes += include_directories('../../common/octeontx2',
'../../mempool/octeontx2', '../../bus/pci')
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
index 1a257cf07..684bb3a0f 100644
--- a/drivers/common/octeontx2/otx2_common.c
+++ b/drivers/common/octeontx2/otx2_common.c
@@ -169,6 +169,41 @@ int otx2_npa_lf_obj_ref(void)
return cnt ? 0 : -EINVAL;
}
+static int
+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint64_t val;
+
+ val = strtoull(value, NULL, 16);
+
+ *(uint64_t *)extra_args = val;
+
+ return 0;
+}
+
+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
+/*
+ * @internal
+ * Parse common device arguments
+ */
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
+{
+
+ struct otx2_idev_cfg *idev;
+ uint64_t npa_lock_mask;
+
+ idev = otx2_intra_dev_get_cfg();
+
+ if (idev == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
+ &parse_npa_lock_mask, &npa_lock_mask);
+
+ idev->npa_lock_mask = npa_lock_mask;
+}
+
/**
* @internal
*/
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index bf5ea86b3..a90b0dcb3 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -8,6 +8,7 @@
#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_cycles.h>
+#include <rte_kvargs.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_io.h>
@@ -65,6 +66,7 @@ struct otx2_idev_cfg {
rte_atomic16_t npa_refcnt;
uint16_t npa_refcnt_u16;
};
+ uint64_t npa_lock_mask;
};
struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
@@ -75,6 +77,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
int otx2_npa_lf_active(void *dev);
int otx2_npa_lf_obj_ref(void);
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
/* Log */
extern int otx2_logtype_base;
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 8f2404bd9..e070e898c 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -45,6 +45,13 @@ DPDK_20.0.1 {
otx2_sec_idev_tx_cpt_qp_put;
} DPDK_20.0;
+DPDK_20.0.2 {
+ global:
+
+ otx2_parse_common_devargs;
+
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d20213d78..3d4b4aca4 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&single_ws);
rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
-
+ otx2_parse_common_devargs(kvlist);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 3a4a9425f..628d35aad 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
goto exit;
rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
exit:
return aura_sz;
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index ac2d61861..5075b027a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -348,6 +348,7 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
struct npa_aq_enq_req *aura_init_req, *pool_init_req;
struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct otx2_idev_cfg *idev;
int rc, off;
aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
@@ -379,6 +380,46 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
return 0;
else
return NPA_LF_ERR_AURA_POOL_INIT;
+
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return 0;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return 0;
+ }
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ otx2_err("Failed to LOCK POOL context");
+ return 0;
+ }
+ }
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ otx2_err("Failed to lock POOL ctx to NDC");
+
+ return 0;
}
static int
@@ -390,6 +431,7 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
struct ndc_sync_op *ndc_req;
+ struct otx2_idev_cfg *idev;
int rc, off;
/* Procedure for disabling an aura/pool */
@@ -434,6 +476,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
return NPA_LF_ERR_AURA_POOL_FINI;
}
+
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return 0;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ otx2_err("Failed to unlock AURA ctx to NDC");
+
+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ otx2_err("Failed to unlock POOL ctx to NDC");
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index f29f01564..bc11f54b5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
null_devargs:
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-06 16:35 [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
@ 2020-03-06 16:35 ` pbhagavatula
2020-03-19 9:36 ` Andrzej Ostruszka
2020-03-19 9:36 ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Andrzej Ostruszka
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2 siblings, 1 reply; 28+ messages in thread
From: pbhagavatula @ 2020-03-06 16:35 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
An application can choose to lock Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/nics/octeontx2.rst | 14 ++
drivers/net/octeontx2/otx2_ethdev.c | 179 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
drivers/net/octeontx2/otx2_rss.c | 23 +++
5 files changed, 231 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 819d09e11..6a13c9b26 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -207,6 +207,20 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f4901c..592fef458 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -381,6 +381,38 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
goto fail;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ return 0;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return 0;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ otx2_err("Failed to LOCK rq context");
+ }
+
return 0;
fail:
return rc;
@@ -430,6 +462,38 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return 0;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return 0;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ otx2_err("Failed to UNLOCK rq context");
+ }
+
return 0;
}
@@ -715,6 +779,90 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return 0;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return 0;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0)
+ otx2_err("Unable to lock POOL in NDC");
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return 0;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return 0;
+ }
+ }
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0)
+ otx2_err("Unable to UNLOCK AURA in NDC");
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -757,7 +905,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -800,6 +961,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -921,6 +1096,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9f0..71f8cf729 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -287,6 +287,8 @@ struct otx2_eth_dev {
uint16_t scalar_ena;
uint16_t rss_tag_as_xor;
uint16_t max_sqb_count;
+ uint16_t lock_rx_ctx;
+ uint16_t lock_tx_ctx;
uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
uint64_t rx_offloads;
uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index bc11f54b5..0857d8247 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint16_t lock_rx_ctx = 0;
+ uint16_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -187,4 +197,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
- OTX2_RSS_TAG_AS_XOR "=1");
+ OTX2_RSS_TAG_AS_XOR "=1"
+ OTX2_LOCK_RX_CTX "=<1-65535>"
+ OTX2_LOCK_TX_CTX "=<1-65535>");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7a8c8f3de..34005ef02 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-06 16:35 [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
@ 2020-03-19 9:36 ` Andrzej Ostruszka
2020-03-19 13:35 ` Pavan Nikhilesh Bhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2 siblings, 1 reply; 28+ messages in thread
From: Andrzej Ostruszka @ 2020-03-19 9:36 UTC (permalink / raw)
To: dev
On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock NPA aura and pool contexts in NDC cache.
> The device argument takes a hexadecimal bitmask where each bit represents
> the corresponding aura/pool id.
> Example:
> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
[...]
> +- ``Lock NPA contexts in NDC``
> +
> + Lock NPA aura and pool contexts in NDC cache.
> + The device argument takes a hexadecimal bitmask where each bit represents
> + the corresponding aura/pool id.
> +
> + For example::
> + -w 0002:0e:00.0,npa_lock_mask=0xf
I think you need to make a paragraph break (empty line) after "::" in
order to have this example treated as a "literal block" (same as
``max_pools`` above - not visible in the diff). At least it looks so when
I build the docs with "ninja doc" and check the result in a browser.
> diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
> index 2c9a0953b..c594934d8 100644
> --- a/doc/guides/mempool/octeontx2.rst
> +++ b/doc/guides/mempool/octeontx2.rst
> @@ -61,6 +61,15 @@ Runtime Config Options
> provide ``max_pools`` parameter to the first PCIe device probed by the given
> application.
>
> +- ``Lock NPA contexts in NDC``
> +
> + Lock NPA aura and pool contexts in NDC cache.
> + The device argument takes a hexadecimal bitmask where each bit represents
> + the corresponding aura/pool id.
> +
> + For example::
> + -w 0002:02:00.0,npa_lock_mask=0xf
Ditto.
> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> index 60187ec72..819d09e11 100644
> --- a/doc/guides/nics/octeontx2.rst
> +++ b/doc/guides/nics/octeontx2.rst
> @@ -213,6 +213,15 @@ Runtime Config Options
> parameters to all the PCIe devices if application requires to configure on
> all the ethdev ports.
>
> +- ``Lock NPA contexts in NDC``
> +
> + Lock NPA aura and pool contexts in NDC cache.
> + The device argument takes a hexadecimal bitmask where each bit represents
> + the corresponding aura/pool id.
> +
> + For example::
> + -w 0002:02:00.0,npa_lock_mask=0xf
Ditto - make that general comment (you might also want to fix other
places - not only those introduced).
[...]
> diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
> index 1a257cf07..684bb3a0f 100644
> --- a/drivers/common/octeontx2/otx2_common.c
> +++ b/drivers/common/octeontx2/otx2_common.c
> @@ -169,6 +169,41 @@ int otx2_npa_lf_obj_ref(void)
> return cnt ? 0 : -EINVAL;
> }
>
> +static int
> +parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
> +{
> + RTE_SET_USED(key);
> + uint64_t val;
> +
> + val = strtoull(value, NULL, 16);
> +
> + *(uint64_t *)extra_args = val;
> +
> + return 0;
> +}
> +
> +#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
> +/*
> + * @internal
> + * Parse common device arguments
> + */
> +void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
> +{
> +
> + struct otx2_idev_cfg *idev;
> + uint64_t npa_lock_mask;
Missing initialization of 'npa_lock_mask' - when the user does not supply
this devarg no callback is called and you copy an uninitialized value to idev (below).
> +
> + idev = otx2_intra_dev_get_cfg();
> +
> + if (idev == NULL)
> + return;
> +
> + rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
> + &parse_npa_lock_mask, &npa_lock_mask);
> +
> + idev->npa_lock_mask = npa_lock_mask;
> +}
[...]
> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> index ac2d61861..5075b027a 100644
> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> @@ -348,6 +348,7 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
> struct npa_aq_enq_req *aura_init_req, *pool_init_req;
> struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
> struct otx2_mbox_dev *mdev = &mbox->dev[0];
> + struct otx2_idev_cfg *idev;
> int rc, off;
>
> aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> @@ -379,6 +380,46 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
> return 0;
> else
> return NPA_LF_ERR_AURA_POOL_INIT;
> +
> + idev = otx2_intra_dev_get_cfg();
> + if (idev == NULL)
> + return 0;
Is this not an error?
> +
> + if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
> + return 0;
> +
> + aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> + aura_init_req->aura_id = aura_id;
> + aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
> + aura_init_req->op = NPA_AQ_INSTOP_LOCK;
> +
> + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> + if (!pool_init_req) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK AURA context");
> + return 0;
Same here and below - if these are not errors then maybe do not log them
as such. If they are errors then we should probably signal them via
return value ("return rc;").
> + }
> +
> + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> + if (!pool_init_req) {
> + otx2_err("Failed to LOCK POOL context");
> + return 0;
See above.
> + }
> + }
> + pool_init_req->aura_id = aura_id;
> + pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
> + pool_init_req->op = NPA_AQ_INSTOP_LOCK;
> +
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + otx2_err("Failed to lock POOL ctx to NDC");
See above.
> +
> + return 0;
> }
>
> static int
> @@ -390,6 +431,7 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
> struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
> struct otx2_mbox_dev *mdev = &mbox->dev[0];
> struct ndc_sync_op *ndc_req;
> + struct otx2_idev_cfg *idev;
> int rc, off;
>
> /* Procedure for disabling an aura/pool */
> @@ -434,6 +476,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
> otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
> return NPA_LF_ERR_AURA_POOL_FINI;
> }
> +
> + idev = otx2_intra_dev_get_cfg();
> + if (idev == NULL)
> + return 0;
> +
> + if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
> + return 0;
Same comments here and below as for *pool_init above.
> +
> + aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> + aura_req->aura_id = aura_id;
> + aura_req->ctype = NPA_AQ_CTYPE_AURA;
> + aura_req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + otx2_err("Failed to unlock AURA ctx to NDC");
> +
> + pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> + pool_req->aura_id = aura_id;
> + pool_req->ctype = NPA_AQ_CTYPE_POOL;
> + pool_req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + otx2_err("Failed to unlock POOL ctx to NDC");
> +
> return 0;
> }
With regards
Andrzej Ostruszka
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
@ 2020-03-19 9:36 ` Andrzej Ostruszka
2020-03-19 13:56 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 28+ messages in thread
From: Andrzej Ostruszka @ 2020-03-19 9:36 UTC (permalink / raw)
To: dev
On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> An application can choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> doc/guides/nics/octeontx2.rst | 14 ++
> drivers/net/octeontx2/otx2_ethdev.c | 179 +++++++++++++++++++-
> drivers/net/octeontx2/otx2_ethdev.h | 2 +
> drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
> drivers/net/octeontx2/otx2_rss.c | 23 +++
> 5 files changed, 231 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> index 819d09e11..6a13c9b26 100644
> --- a/doc/guides/nics/octeontx2.rst
> +++ b/doc/guides/nics/octeontx2.rst
> @@ -207,6 +207,20 @@ Runtime Config Options
> With the above configuration, application can enable inline IPsec processing
> on 128 SAs (SPI 0-127).
>
> +- ``Lock Rx contexts in NDC cache``
> +
> + Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
> +
> + For example::
> + -w 0002:02:00.0,lock_rx_ctx=1
"=1" is needed because kvargs requires it? If that is the case then
I'll think about extending kvargs to accept simple keys - this syntax
doesn't feel right when all one really wants is just to test the
presence of a flag (for 1/true) or its lack (for 0/false).
BTW - extra line break after "::"
> +
> +- ``Lock Tx contexts in NDC cache``
> +
> + Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
> +
> + For example::
> + -w 0002:02:00.0,lock_tx_ctx=1
Same as above
[...]
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index e60f4901c..592fef458 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -381,6 +381,38 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
> goto fail;
> }
>
> + if (dev->lock_rx_ctx) {
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_CQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK cq context");
> + return 0;
Similar comments as for the previous patch. Is this not a failure? If
so, why "return 0"? If not a failure, don't log it as an error. BTW here
the failure is not in locking but in flushing the mbox (right?), so
maybe change the err-msg to differentiate it from the case below.
> + }
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + otx2_err("Failed to LOCK rq context");
> + return 0;
Same as above - error or not?
> + }
> + }
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_RQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + otx2_err("Failed to LOCK rq context");
Ditto.
> + }
> +
> return 0;
> fail:
> return rc;
Same comments for *rq_uninit (below) as those for *rq_init (above).
> @@ -430,6 +462,38 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
> return rc;
> }
>
> + if (dev->lock_rx_ctx) {
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = rxq->rq;
> + aq->ctype = NIX_AQ_CTYPE_CQ;
> + aq->op = NIX_AQ_INSTOP_UNLOCK;
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to UNLOCK cq context");
> + return 0;
> + }
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + otx2_err("Failed to UNLOCK rq context");
> + return 0;
> + }
> + }
> + aq->qidx = rxq->rq;
> + aq->ctype = NIX_AQ_CTYPE_RQ;
> + aq->op = NIX_AQ_INSTOP_UNLOCK;
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + otx2_err("Failed to UNLOCK rq context");
> + }
> +
> return 0;
> }
>
And the same set of comments applies below to *sqb_lock/unlock - in
addition, one comment about the error message.
> @@ -715,6 +779,90 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
> return flags;
> }
>
> +static int
> +nix_sqb_lock(struct rte_mempool *mp)
> +{
> + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
> + struct npa_aq_enq_req *req;
> + int rc;
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> + req->ctype = NPA_AQ_CTYPE_AURA;
> + req->op = NPA_AQ_INSTOP_LOCK;
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + if (!req) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(npa_lf->mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK AURA context");
> + return 0;
> + }
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + if (!req) {
> + otx2_err("Failed to LOCK POOL context");
Apart from the general err-or-not-err comment: here you have not
attempted to lock the pool yet, you do that below. Just like before,
use a different message 6 lines above (since you have only flushed -
not attempted to lock) and here use "Failed to LOCK AURA context".
The same comment applies below for unlock.
> + return 0;
> + }
> + }
> +
> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> + req->ctype = NPA_AQ_CTYPE_POOL;
> + req->op = NPA_AQ_INSTOP_LOCK;
> +
> + rc = otx2_mbox_process(npa_lf->mbox);
> + if (rc < 0)
> + otx2_err("Unable to lock POOL in NDC");
> +
> + return 0;
> +}
> +
> +static int
> +nix_sqb_unlock(struct rte_mempool *mp)
> +{
> + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
> + struct npa_aq_enq_req *req;
> + int rc;
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> + req->ctype = NPA_AQ_CTYPE_AURA;
> + req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + if (!req) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(npa_lf->mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to UNLOCK AURA context");
> + return 0;
> + }
> +
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + if (!req) {
> + otx2_err("Failed to UNLOCK POOL context");
> + return 0;
> + }
> + }
> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
> + req->ctype = NPA_AQ_CTYPE_POOL;
> + req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> + rc = otx2_mbox_process(npa_lf->mbox);
> + if (rc < 0)
> + otx2_err("Unable to UNLOCK AURA in NDC");
> +
> + return 0;
> +}
> +
> static int
> nix_sq_init(struct otx2_eth_txq *txq)
> {
> @@ -757,7 +905,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
> /* Many to one reduction */
> sq->sq.qint_idx = txq->sq % dev->qints;
>
> - return otx2_mbox_process(mbox);
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + return rc;
> +
> + if (dev->lock_tx_ctx) {
> + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + sq->qidx = txq->sq;
> + sq->ctype = NIX_AQ_CTYPE_SQ;
> + sq->op = NIX_AQ_INSTOP_LOCK;
> +
> + rc = otx2_mbox_process(mbox);
> + }
> +
> + return rc;
> }
>
> static int
> @@ -800,6 +961,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
> if (rc)
> return rc;
>
> + if (dev->lock_tx_ctx) {
> + /* Unlock sq */
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = txq->sq;
> + aq->ctype = NIX_AQ_CTYPE_SQ;
> + aq->op = NIX_AQ_INSTOP_UNLOCK;
> +
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0)
> + return rc;
> +
> + nix_sqb_unlock(txq->sqb_pool);
> + }
> +
> /* Read SQ and free sqb's */
> aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> aq->qidx = txq->sq;
> @@ -921,6 +1096,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
> }
>
> nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
> + if (dev->lock_tx_ctx)
> + nix_sqb_lock(txq->sqb_pool);
>
> return 0;
> fail:
> diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
> index e5684f9f0..71f8cf729 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.h
> +++ b/drivers/net/octeontx2/otx2_ethdev.h
> @@ -287,6 +287,8 @@ struct otx2_eth_dev {
> uint16_t scalar_ena;
> uint16_t rss_tag_as_xor;
> uint16_t max_sqb_count;
> + uint16_t lock_rx_ctx;
> + uint16_t lock_tx_ctx;
> uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
> uint64_t rx_offloads;
> uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
> diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> index bc11f54b5..0857d8247 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> @@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
> #define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
> #define OTX2_SWITCH_HEADER_TYPE "switch_header"
> #define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
> +#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
> +#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
>
> int
> otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> uint16_t switch_header_type = 0;
> uint16_t flow_max_priority = 3;
> uint16_t ipsec_in_max_spi = 1;
> - uint16_t scalar_enable = 0;
> uint16_t rss_tag_as_xor = 0;
> + uint16_t scalar_enable = 0;
> struct rte_kvargs *kvlist;
> + uint16_t lock_rx_ctx = 0;
> + uint16_t lock_tx_ctx = 0;
>
> if (devargs == NULL)
> goto null_devargs;
> @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> &parse_switch_header_type, &switch_header_type);
> rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
> &parse_flag, &rss_tag_as_xor);
> + rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
> + &parse_flag, &lock_rx_ctx);
> + rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
> + &parse_flag, &lock_tx_ctx);
> otx2_parse_common_devargs(kvlist);
> rte_kvargs_free(kvlist);
>
> @@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> dev->scalar_ena = scalar_enable;
> dev->rss_tag_as_xor = rss_tag_as_xor;
> dev->max_sqb_count = sqb_count;
> + dev->lock_rx_ctx = lock_rx_ctx;
> + dev->lock_tx_ctx = lock_tx_ctx;
> dev->rss_info.rss_size = rss_size;
> dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
> dev->npc_flow.flow_max_priority = flow_max_priority;
> @@ -187,4 +197,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
> OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
> OTX2_FLOW_MAX_PRIORITY "=<1-32>"
> OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
> - OTX2_RSS_TAG_AS_XOR "=1");
> + OTX2_RSS_TAG_AS_XOR "=1"
> + OTX2_LOCK_RX_CTX "=<1-65535>"
> + OTX2_LOCK_TX_CTX "=<1-65535>");
AFAIU the "=1" is required due to a kvargs parsing limitation. But why
is the range here 1-65535 when all you want is just a boolean flag
(whether the key is present or not)?
> diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
> index 7a8c8f3de..34005ef02 100644
> --- a/drivers/net/octeontx2/otx2_rss.c
> +++ b/drivers/net/octeontx2/otx2_rss.c
> @@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
> req->qidx = (group * rss->rss_size) + idx;
> req->ctype = NIX_AQ_CTYPE_RSS;
> req->op = NIX_AQ_INSTOP_INIT;
> +
> + if (!dev->lock_rx_ctx)
> + continue;
> +
> + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!req) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0)
> + return rc;
> +
> + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!req)
> + return -ENOMEM;
> + }
> + req->rss.rq = ind_tbl[idx];
> + /* Fill AQ info */
> + req->qidx = (group * rss->rss_size) + idx;
> + req->ctype = NIX_AQ_CTYPE_RSS;
> + req->op = NIX_AQ_INSTOP_LOCK;
> }
>
> otx2_mbox_msg_send(mbox, 0);
>
And here you treat the locking errors as errors - so I think you need to
just adapt to this style and fix the previous comments.
With regards
Andrzej Ostruszka
* Re: [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-19 9:36 ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Andrzej Ostruszka
@ 2020-03-19 13:35 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-03-19 13:35 UTC (permalink / raw)
To: Andrzej Ostruszka, dev; +Cc: Jerin Jacob Kollanukkaran
>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Andrzej Ostruszka
>Sent: Thursday, March 19, 2020 3:07 PM
>To: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs
>to lock ctx in cache
>
>On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Add device arguments to lock NPA aura and pool contexts in NDC
>cache.
>> The device args take hexadecimal bitmask where each bit represent
>the
>> corresponding aura/pool id.
>> Example:
>> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>[...]
>> +- ``Lock NPA contexts in NDC``
>> +
>> + Lock NPA aura and pool contexts in NDC cache.
>> + The device args take hexadecimal bitmask where each bit
>represent the
>> + corresponding aura/pool id.
>> +
>> + For example::
>> + -w 0002:0e:00.0,npa_lock_mask=0xf
>
>I think you need to make a paragraph break (empty line) after "::" in
>order to have this example treated as "literal block" (same as max_pool
>above - not visible in diff). At least it looks so when I build doc
>with "ninja doc" and check the result in browser.
Will fix in v2.
>
>> diff --git a/doc/guides/mempool/octeontx2.rst
>b/doc/guides/mempool/octeontx2.rst
>> index 2c9a0953b..c594934d8 100644
>> --- a/doc/guides/mempool/octeontx2.rst
>> +++ b/doc/guides/mempool/octeontx2.rst
>> @@ -61,6 +61,15 @@ Runtime Config Options
>> provide ``max_pools`` parameter to the first PCIe device probed by
>the given
>> application.
>>
>> +- ``Lock NPA contexts in NDC``
>> +
>> + Lock NPA aura and pool contexts in NDC cache.
>> + The device args take hexadecimal bitmask where each bit
>represent the
>> + corresponding aura/pool id.
>> +
>> + For example::
>> + -w 0002:02:00.0,npa_lock_mask=0xf
>
>Ditto.
>
>> diff --git a/doc/guides/nics/octeontx2.rst
>b/doc/guides/nics/octeontx2.rst
>> index 60187ec72..819d09e11 100644
>> --- a/doc/guides/nics/octeontx2.rst
>> +++ b/doc/guides/nics/octeontx2.rst
>> @@ -213,6 +213,15 @@ Runtime Config Options
>> parameters to all the PCIe devices if application requires to
>configure on
>> all the ethdev ports.
>>
>> +- ``Lock NPA contexts in NDC``
>> +
>> + Lock NPA aura and pool contexts in NDC cache.
>> + The device args take hexadecimal bitmask where each bit
>represent the
>> + corresponding aura/pool id.
>> +
>> + For example::
>> + -w 0002:02:00.0,npa_lock_mask=0xf
>
>Ditto - make that general comment (you might also want to fix other
>places - not only those introduced).
>
>[...]
>> diff --git a/drivers/common/octeontx2/otx2_common.c
>b/drivers/common/octeontx2/otx2_common.c
>> index 1a257cf07..684bb3a0f 100644
>> --- a/drivers/common/octeontx2/otx2_common.c
>> +++ b/drivers/common/octeontx2/otx2_common.c
>> @@ -169,6 +169,41 @@ int otx2_npa_lf_obj_ref(void)
>> return cnt ? 0 : -EINVAL;
>> }
>>
>> +static int
>> +parse_npa_lock_mask(const char *key, const char *value, void
>*extra_args)
>> +{
>> + RTE_SET_USED(key);
>> + uint64_t val;
>> +
>> + val = strtoull(value, NULL, 16);
>> +
>> + *(uint64_t *)extra_args = val;
>> +
>> + return 0;
>> +}
>> +
>> +#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
>> +/*
>> + * @internal
>> + * Parse common device arguments
>> + */
>> +void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
>> +{
>> +
>> + struct otx2_idev_cfg *idev;
>> + uint64_t npa_lock_mask;
>
>Missing initialization of 'npa_lock_mask' - when user does not supply
>this devarg then no callback is called and you copy this to idev (below).
Will fix in v2.
>
>> +
>> + idev = otx2_intra_dev_get_cfg();
>> +
>> + if (idev == NULL)
>> + return;
>> +
>> + rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
>> + &parse_npa_lock_mask, &npa_lock_mask);
>> +
>> + idev->npa_lock_mask = npa_lock_mask;
>> +}
>[...]
>> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c
>b/drivers/mempool/octeontx2/otx2_mempool_ops.c
>> index ac2d61861..5075b027a 100644
>> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
>> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
>> @@ -348,6 +348,7 @@ npa_lf_aura_pool_init(struct otx2_mbox
>*mbox, uint32_t aura_id,
>> struct npa_aq_enq_req *aura_init_req, *pool_init_req;
>> struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
>> struct otx2_mbox_dev *mdev = &mbox->dev[0];
>> + struct otx2_idev_cfg *idev;
>> int rc, off;
>>
>> aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> @@ -379,6 +380,46 @@ npa_lf_aura_pool_init(struct otx2_mbox
>*mbox, uint32_t aura_id,
>> return 0;
>> else
>> return NPA_LF_ERR_AURA_POOL_INIT;
>> +
>> + idev = otx2_intra_dev_get_cfg();
>> + if (idev == NULL)
>> + return 0;
>
>Is this not an error?
I think that condition can never be true, as this is part of device
probe and we would exit the application there.
I will move the check above the mbox message send just to be safe.
>
>> +
>> + if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
>> + return 0;
>> +
>> + aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> + aura_init_req->aura_id = aura_id;
>> + aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
>> + aura_init_req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> + pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> + if (!pool_init_req) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> + if (rc < 0) {
>> + otx2_err("Failed to LOCK AURA context");
>> + return 0;
>
>Same here and below - if these are not errors then maybe do not log
>them
>as such. If they are errors then we should probably signal them via
>return value ("return rc;").
These are not catastrophic errors, since locking is first come, first
served and the pool can still function without locking.
I have logged them as errors for debuggability since the application
requested locking through devargs.
>
>> + }
>> +
>> + pool_init_req =
>otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> + if (!pool_init_req) {
>> + otx2_err("Failed to LOCK POOL context");
>> + return 0;
>
>See above.
>
>> + }
>> + }
>> + pool_init_req->aura_id = aura_id;
>> + pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
>> + pool_init_req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + otx2_err("Failed to lock POOL ctx to NDC");
>
>See above.
>
>> +
>> + return 0;
>> }
>>
>> static int
>> @@ -390,6 +431,7 @@ npa_lf_aura_pool_fini(struct otx2_mbox
>*mbox,
>> struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
>> struct otx2_mbox_dev *mdev = &mbox->dev[0];
>> struct ndc_sync_op *ndc_req;
>> + struct otx2_idev_cfg *idev;
>> int rc, off;
>>
>> /* Procedure for disabling an aura/pool */
>> @@ -434,6 +476,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox
>*mbox,
>> otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
>> return NPA_LF_ERR_AURA_POOL_FINI;
>> }
>> +
>> + idev = otx2_intra_dev_get_cfg();
>> + if (idev == NULL)
>> + return 0;
>> +
>> + if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
>> + return 0;
>
>Same comments here and below as for *pool_init above.
>
>> +
>> + aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> + aura_req->aura_id = aura_id;
>> + aura_req->ctype = NPA_AQ_CTYPE_AURA;
>> + aura_req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + otx2_err("Failed to unlock AURA ctx to NDC");
>> +
>> + pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>> + pool_req->aura_id = aura_id;
>> + pool_req->ctype = NPA_AQ_CTYPE_POOL;
>> + pool_req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + otx2_err("Failed to unlock POOL ctx to NDC");
>> +
>> return 0;
>> }
>With regards
>Andrzej Ostruszka
* Re: [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-19 9:36 ` Andrzej Ostruszka
@ 2020-03-19 13:56 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-03-19 13:56 UTC (permalink / raw)
To: Andrzej Ostruszka, dev
>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Andrzej Ostruszka
>Sent: Thursday, March 19, 2020 3:07 PM
>To: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock
>Rx/Tx ctx
>
>On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Add device arguments to lock Rx/Tx contexts.
>> Application can either choose to lock Rx or Tx contexts by using
>> 'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port.
>>
>> Example:
>> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>> doc/guides/nics/octeontx2.rst | 14 ++
>> drivers/net/octeontx2/otx2_ethdev.c | 179
>+++++++++++++++++++-
>> drivers/net/octeontx2/otx2_ethdev.h | 2 +
>> drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
>> drivers/net/octeontx2/otx2_rss.c | 23 +++
>> 5 files changed, 231 insertions(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/nics/octeontx2.rst
>b/doc/guides/nics/octeontx2.rst
>> index 819d09e11..6a13c9b26 100644
>> --- a/doc/guides/nics/octeontx2.rst
>> +++ b/doc/guides/nics/octeontx2.rst
>> @@ -207,6 +207,20 @@ Runtime Config Options
>> With the above configuration, application can enable inline IPsec
>processing
>> on 128 SAs (SPI 0-127).
>>
>> +- ``Lock Rx contexts in NDC cache``
>> +
>> + Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
>> +
>> + For example::
>> + -w 0002:02:00.0,lock_rx_ctx=1
>
>"=1" is needed because of kvargs require it? If that is the case then
>I'll think about extending kvargs to accept simple keys - this syntax
>doesn't feel right when all one really wants is just to test the
>presence of flag (for 1/true) or its lack (for 0/false).
>
Kvargs requires a key-value pair, with RTE_KVARGS_KV_DELIM (`=`) as the delimiter.
>BTW - extra line break after "::"
Will fix in v2.
>
>> +
>> +- ``Lock Tx contexts in NDC cache``
>> +
>> + Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
>> +
>> + For example::
>> + -w 0002:02:00.0,lock_tx_ctx=1
>
>Same as above
>
>[...]
>> diff --git a/drivers/net/octeontx2/otx2_ethdev.c
>b/drivers/net/octeontx2/otx2_ethdev.c
>> index e60f4901c..592fef458 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev.c
>> +++ b/drivers/net/octeontx2/otx2_ethdev.c
>> @@ -381,6 +381,38 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev,
>struct otx2_eth_dev *dev,
>> goto fail;
>> }
>>
>> + if (dev->lock_rx_ctx) {
>> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + aq->qidx = qid;
>> + aq->ctype = NIX_AQ_CTYPE_CQ;
>> + aq->op = NIX_AQ_INSTOP_LOCK;
>> +
>> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!aq) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> + if (rc < 0) {
>> + otx2_err("Failed to LOCK cq context");
>> + return 0;
>
>Similar comments as for the previous patch. Is this not a failure? If
>so why "return 0"? If not failure don't log it as an error.
Since NDC locking is done in FIFO order, locking might fail, but it is
not a catastrophic failure as NIX will function normally. I'd still
like to log it as an error for visibility.
> BTW here
>the failure is not for locking but for flushing of the mbox (right?)
otx2_mbox_msg_send will never fail. We might get a timeout while
waiting for the response, which we should consider an AQ instruction
failure.
>so maybe change the err-msg to differentiate from the case below.
>
>> + }
>> +
>> + aq =
>otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!aq) {
>> + otx2_err("Failed to LOCK rq context");
>> + return 0;
>
>Same as above - error or not?
>
>> + }
>> + }
>> + aq->qidx = qid;
>> + aq->ctype = NIX_AQ_CTYPE_RQ;
>> + aq->op = NIX_AQ_INSTOP_LOCK;
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + otx2_err("Failed to LOCK rq context");
>
>Ditto.
>
>> + }
>> +
>> return 0;
>> fail:
>> return rc;
>
>Same comments for *rq_uninit (below) as those for *rq_init (above).
>
>> @@ -430,6 +462,38 @@ nix_cq_rq_uninit(struct rte_eth_dev
>*eth_dev, struct otx2_eth_rxq *rxq)
>> return rc;
>> }
>>
>> + if (dev->lock_rx_ctx) {
>> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + aq->qidx = rxq->rq;
>> + aq->ctype = NIX_AQ_CTYPE_CQ;
>> + aq->op = NIX_AQ_INSTOP_UNLOCK;
>> +
>> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!aq) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> + if (rc < 0) {
>> + otx2_err("Failed to UNLOCK cq
>context");
>> + return 0;
>> + }
>> +
>> + aq =
>otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!aq) {
>> + otx2_err("Failed to UNLOCK rq
>context");
>> + return 0;
>> + }
>> + }
>> + aq->qidx = rxq->rq;
>> + aq->ctype = NIX_AQ_CTYPE_RQ;
>> + aq->op = NIX_AQ_INSTOP_UNLOCK;
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + otx2_err("Failed to UNLOCK rq context");
>> + }
>> +
>> return 0;
>> }
>>
>
>And the same set of comments apply below to *sqb_lock/unlock - in
>addition one comment about err-msg.
>
>> @@ -715,6 +779,90 @@ nix_tx_offload_flags(struct rte_eth_dev
>*eth_dev)
>> return flags;
>> }
>>
>> +static int
>> +nix_sqb_lock(struct rte_mempool *mp)
>> +{
>> + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
>> + struct npa_aq_enq_req *req;
>> + int rc;
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> + req->ctype = NPA_AQ_CTYPE_AURA;
>> + req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> + if (!req) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(npa_lf->mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
>> + if (rc < 0) {
>> + otx2_err("Failed to LOCK AURA context");
>> + return 0;
>> + }
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf-
>>mbox);
>> + if (!req) {
>> + otx2_err("Failed to LOCK POOL context");
>
>Apart from the general err-or-not-err comment here you have not
>attempted to lock pool yet, you do that below. Just like before use
>different msg 6 lines above (since you have only flushed - not
>attempted
>to lock) and here use the "Failed to LOCK AURA context". The same
>comment applies below for unlock.
>
The AURA message would have already been delivered to the AQ here. I
think a more appropriate message would be "Failed to get mbox memory
for locking pool ctx".
>> + return 0;
>> + }
>> + }
>> +
>> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> + req->ctype = NPA_AQ_CTYPE_POOL;
>> + req->op = NPA_AQ_INSTOP_LOCK;
>> +
>> + rc = otx2_mbox_process(npa_lf->mbox);
>> + if (rc < 0)
>> + otx2_err("Unable to lock POOL in NDC");
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +nix_sqb_unlock(struct rte_mempool *mp)
>> +{
>> + struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
>> + struct npa_aq_enq_req *req;
>> + int rc;
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> + req->ctype = NPA_AQ_CTYPE_AURA;
>> + req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> + if (!req) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(npa_lf->mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
>> + if (rc < 0) {
>> + otx2_err("Failed to UNLOCK AURA context");
>> + return 0;
>> + }
>> +
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf-
>>mbox);
>> + if (!req) {
>> + otx2_err("Failed to UNLOCK POOL context");
>> + return 0;
>> + }
>> + }
>> + req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
>> + req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
>> + req->ctype = NPA_AQ_CTYPE_POOL;
>> + req->op = NPA_AQ_INSTOP_UNLOCK;
>> +
>> + rc = otx2_mbox_process(npa_lf->mbox);
>> + if (rc < 0)
>> + otx2_err("Unable to UNLOCK AURA in NDC");
>> +
>> + return 0;
>> +}
>> +
>> static int
>> nix_sq_init(struct otx2_eth_txq *txq)
>> {
>> @@ -757,7 +905,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
>> /* Many to one reduction */
>> sq->sq.qint_idx = txq->sq % dev->qints;
>>
>> - return otx2_mbox_process(mbox);
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + return rc;
>> +
>> + if (dev->lock_tx_ctx) {
>> + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + sq->qidx = txq->sq;
>> + sq->ctype = NIX_AQ_CTYPE_SQ;
>> + sq->op = NIX_AQ_INSTOP_LOCK;
>> +
>> + rc = otx2_mbox_process(mbox);
>> + }
>> +
>> + return rc;
>> }
>>
>> static int
>> @@ -800,6 +961,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
>> if (rc)
>> return rc;
>>
>> + if (dev->lock_tx_ctx) {
>> + /* Unlock sq */
>> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + aq->qidx = txq->sq;
>> + aq->ctype = NIX_AQ_CTYPE_SQ;
>> + aq->op = NIX_AQ_INSTOP_UNLOCK;
>> +
>> + rc = otx2_mbox_process(mbox);
>> + if (rc < 0)
>> + return rc;
>> +
>> + nix_sqb_unlock(txq->sqb_pool);
>> + }
>> +
>> /* Read SQ and free sqb's */
>> aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> aq->qidx = txq->sq;
>> @@ -921,6 +1096,8 @@ nix_alloc_sqb_pool(int port, struct
>otx2_eth_txq *txq, uint16_t nb_desc)
>> }
>>
>> nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
>> + if (dev->lock_tx_ctx)
>> + nix_sqb_lock(txq->sqb_pool);
>>
>> return 0;
>> fail:
>> diff --git a/drivers/net/octeontx2/otx2_ethdev.h
>b/drivers/net/octeontx2/otx2_ethdev.h
>> index e5684f9f0..71f8cf729 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev.h
>> +++ b/drivers/net/octeontx2/otx2_ethdev.h
>> @@ -287,6 +287,8 @@ struct otx2_eth_dev {
>> uint16_t scalar_ena;
>> uint16_t rss_tag_as_xor;
>> uint16_t max_sqb_count;
>> + uint16_t lock_rx_ctx;
>> + uint16_t lock_tx_ctx;
>> uint16_t rx_offload_flags; /* Selected Rx offload
>flags(NIX_RX_*_F) */
>> uint64_t rx_offloads;
>> uint16_t tx_offload_flags; /* Selected Tx offload
>flags(NIX_TX_*_F) */
>> diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c
>b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> index bc11f54b5..0857d8247 100644
>> --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>> @@ -124,6 +124,8 @@ parse_switch_header_type(const char *key,
>const char *value, void *extra_args)
>> #define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
>> #define OTX2_SWITCH_HEADER_TYPE "switch_header"
>> #define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
>> +#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
>> +#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
>>
>> int
>> otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct
>otx2_eth_dev *dev)
>> @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct
>rte_devargs *devargs, struct otx2_eth_dev *dev)
>> uint16_t switch_header_type = 0;
>> uint16_t flow_max_priority = 3;
>> uint16_t ipsec_in_max_spi = 1;
>> - uint16_t scalar_enable = 0;
>> uint16_t rss_tag_as_xor = 0;
>> + uint16_t scalar_enable = 0;
>> struct rte_kvargs *kvlist;
>> + uint16_t lock_rx_ctx = 0;
>> + uint16_t lock_tx_ctx = 0;
>>
>> if (devargs == NULL)
>> goto null_devargs;
>> @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct
>rte_devargs *devargs, struct otx2_eth_dev *dev)
>> &parse_switch_header_type,
>&switch_header_type);
>> rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
>> &parse_flag, &rss_tag_as_xor);
>> + rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
>> + &parse_flag, &lock_rx_ctx);
>> + rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
>> + &parse_flag, &lock_tx_ctx);
>> otx2_parse_common_devargs(kvlist);
>> rte_kvargs_free(kvlist);
>>
>> @@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct
>rte_devargs *devargs, struct otx2_eth_dev *dev)
>> dev->scalar_ena = scalar_enable;
>> dev->rss_tag_as_xor = rss_tag_as_xor;
>> dev->max_sqb_count = sqb_count;
>> + dev->lock_rx_ctx = lock_rx_ctx;
>> + dev->lock_tx_ctx = lock_tx_ctx;
>> dev->rss_info.rss_size = rss_size;
>> dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
>> dev->npc_flow.flow_max_priority = flow_max_priority;
>> @@ -187,4 +197,6 @@
>RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
>> OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
>> OTX2_FLOW_MAX_PRIORITY "=<1-32>"
>> OTX2_SWITCH_HEADER_TYPE
>"=<higig2|dsa>"
>> - OTX2_RSS_TAG_AS_XOR "=1");
>> + OTX2_RSS_TAG_AS_XOR "=1"
>> + OTX2_LOCK_RX_CTX "=<1-65535>"
>> + OTX2_LOCK_TX_CTX "=<1-65535>");
>
>AFAIU the "=1" is required due to kvargs parsing limitation. But why
>the range here is 1-65535 when all you want is just a boolean flag (if
>the key is present or not).
>
My bad - we were debating whether it should be per Rx/Tx queue, which
would use a mask. I will fix it in v2.
>> diff --git a/drivers/net/octeontx2/otx2_rss.c
>b/drivers/net/octeontx2/otx2_rss.c
>> index 7a8c8f3de..34005ef02 100644
>> --- a/drivers/net/octeontx2/otx2_rss.c
>> +++ b/drivers/net/octeontx2/otx2_rss.c
>> @@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev
>*dev,
>> req->qidx = (group * rss->rss_size) + idx;
>> req->ctype = NIX_AQ_CTYPE_RSS;
>> req->op = NIX_AQ_INSTOP_INIT;
>> +
>> + if (!dev->lock_rx_ctx)
>> + continue;
>> +
>> + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!req) {
>> + /* The shared memory buffer can be full.
>> + * Flush it and retry
>> + */
>> + otx2_mbox_msg_send(mbox, 0);
>> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
>> + if (rc < 0)
>> + return rc;
>> +
>> + req =
>otx2_mbox_alloc_msg_nix_aq_enq(mbox);
>> + if (!req)
>> + return -ENOMEM;
>> + }
>> + req->rss.rq = ind_tbl[idx];
>> + /* Fill AQ info */
>> + req->qidx = (group * rss->rss_size) + idx;
>> + req->ctype = NIX_AQ_CTYPE_RSS;
>> + req->op = NIX_AQ_INSTOP_LOCK;
>> }
>>
>> otx2_mbox_msg_send(mbox, 0);
>>
>
>And here you treat the locking errors as errors - so I think you need to
>just adapt to this style and fix the previous comments.
I think here we should continue creating the RSS table without locking
it, as locking is not mandatory. I will modify accordingly in v2.
>
>With regards
>Andrzej Ostruszka
Thanks,
Pavan.
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-06 16:35 [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-19 9:36 ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Andrzej Ostruszka
@ 2020-03-24 16:53 ` pbhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
` (3 more replies)
2 siblings, 4 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-24 16:53 UTC (permalink / raw)
To: jerinj, aostruszka, Pavan Nikhilesh, John McNamara,
Marko Kovacevic, Nithin Dabilpuram, Vamsi Attunuru,
Kiran Kumar K
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in NDC cache.
The device args take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v2 Changes:
- Fix formatting in doc(Andrzej).
- Add error returns for all failures(Andrzej).
- Fix devargs parameter list(Andrzej).
doc/guides/eventdevs/octeontx2.rst | 10 +++
doc/guides/mempool/octeontx2.rst | 10 +++
doc/guides/nics/octeontx2.rst | 12 +++
drivers/common/octeontx2/Makefile | 2 +-
drivers/common/octeontx2/meson.build | 2 +-
drivers/common/octeontx2/otx2_common.c | 34 +++++++++
drivers/common/octeontx2/otx2_common.h | 5 ++
.../rte_common_octeontx2_version.map | 7 ++
drivers/event/octeontx2/otx2_evdev.c | 5 +-
drivers/mempool/octeontx2/otx2_mempool.c | 4 +-
drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +-
12 files changed, 163 insertions(+), 6 deletions(-)
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index d4b2515ce..6502f6415 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -148,6 +148,16 @@ Runtime Config Options
-w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:0e:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 2c9a0953b..49b45a04e 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -61,6 +61,16 @@ Runtime Config Options
provide ``max_pools`` parameter to the first PCIe device probed by the given
application.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec72..c2d87c9d0 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -194,6 +194,7 @@ Runtime Config Options
Setting this flag to 1 to select the legacy mode.
For example to select the legacy mode(RSS tag adder as XOR)::
+
-w 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -202,6 +203,7 @@ Runtime Config Options
``ipsec_in_max_spi`` ``devargs`` parameter.
For example::
+
-w 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
@@ -213,6 +215,16 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Limitations
-----------
diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
index 48f033dc6..64c5e60e2 100644
--- a/drivers/common/octeontx2/Makefile
+++ b/drivers/common/octeontx2/Makefile
@@ -35,6 +35,6 @@ SRCS-y += otx2_common.c
SRCS-y += otx2_sec_idev.c
LDLIBS += -lrte_eal
-LDLIBS += -lrte_ethdev
+LDLIBS += -lrte_ethdev -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
index cc2c26123..bc4917b8c 100644
--- a/drivers/common/octeontx2/meson.build
+++ b/drivers/common/octeontx2/meson.build
@@ -23,6 +23,6 @@ foreach flag: extra_flags
endif
endforeach
-deps = ['eal', 'pci', 'ethdev']
+deps = ['eal', 'pci', 'ethdev', 'kvargs']
includes += include_directories('../../common/octeontx2',
'../../mempool/octeontx2', '../../bus/pci')
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
index 1a257cf07..5e7272f69 100644
--- a/drivers/common/octeontx2/otx2_common.c
+++ b/drivers/common/octeontx2/otx2_common.c
@@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void)
return cnt ? 0 : -EINVAL;
}
+static int
+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint64_t val;
+
+ val = strtoull(value, NULL, 16);
+
+ *(uint64_t *)extra_args = val;
+
+ return 0;
+}
+
+/*
+ * @internal
+ * Parse common device arguments
+ */
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
+{
+
+ struct otx2_idev_cfg *idev;
+ uint64_t npa_lock_mask = 0;
+
+ idev = otx2_intra_dev_get_cfg();
+
+ if (idev == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
+ &parse_npa_lock_mask, &npa_lock_mask);
+
+ idev->npa_lock_mask = npa_lock_mask;
+}
+
/**
* @internal
*/
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index bf5ea86b3..b3fdefe95 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -8,6 +8,7 @@
#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_cycles.h>
+#include <rte_kvargs.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_io.h>
@@ -49,6 +50,8 @@
(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#endif
+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
+
/* Compiler attributes */
#ifndef __hot
#define __hot __attribute__((hot))
@@ -65,6 +68,7 @@ struct otx2_idev_cfg {
rte_atomic16_t npa_refcnt;
uint16_t npa_refcnt_u16;
};
+ uint64_t npa_lock_mask;
};
struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
@@ -75,6 +79,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
int otx2_npa_lf_active(void *dev);
int otx2_npa_lf_obj_ref(void);
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
/* Log */
extern int otx2_logtype_base;
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 8f2404bd9..e070e898c 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -45,6 +45,13 @@ DPDK_20.0.1 {
otx2_sec_idev_tx_cpt_qp_put;
} DPDK_20.0;
+DPDK_20.0.2 {
+ global:
+
+ otx2_parse_common_devargs;
+
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d20213d78..630073de5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&single_ws);
rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
-
+ otx2_parse_common_devargs(kvlist);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
@@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
OTX2_SSO_SINGLE_WS "=1"
OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_SELFTEST "=1");
+ OTX2_SSO_SELFTEST "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 3a4a9425f..fb630fecf 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
goto exit;
rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
exit:
return aura_sz;
@@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>");
+ OTX2_MAX_POOLS "=<128-1048576>"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index ac2d61861..1cc34f0d1 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
struct npa_aq_enq_req *aura_init_req, *pool_init_req;
struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -ENOMEM;
+
aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
aura_init_req->aura_id = aura_id;
@@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
return 0;
else
return NPA_LF_ERR_AURA_POOL_INIT;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return -ENOMEM;
+ }
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to lock POOL ctx to NDC");
+ return -ENOMEM;
+ }
+
+ return 0;
}
static int
@@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
struct ndc_sync_op *ndc_req;
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -EINVAL;
+
/* Procedure for disabling an aura/pool */
rte_delay_us(10);
npa_lf_aura_op_alloc(aura_handle, 0);
@@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
return NPA_LF_ERR_AURA_POOL_FINI;
}
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock AURA ctx to NDC");
+ return -EINVAL;
+ }
+
+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock POOL ctx to NDC");
+ return -EINVAL;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index f29f01564..5390eb217 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
null_devargs:
@@ -186,4 +187,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
- OTX2_RSS_TAG_AS_XOR "=1");
+ OTX2_RSS_TAG_AS_XOR "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
@ 2020-03-24 16:53 ` pbhagavatula
2020-03-25 6:51 ` [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Jerin Jacob
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-24 16:53 UTC (permalink / raw)
To: jerinj, aostruszka, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
The application can choose to lock either Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/nics/octeontx2.rst | 16 ++
drivers/net/octeontx2/otx2_ethdev.c | 187 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
drivers/net/octeontx2/otx2_rss.c | 23 +++
5 files changed, 241 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index c2d87c9d0..df19443e3 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -209,6 +209,22 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f4901c..6369c2fa9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
goto fail;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ goto fail;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK rq context");
+ goto fail;
+ }
+ }
+
return 0;
fail:
return rc;
@@ -430,6 +464,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -715,6 +783,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to lock POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to UNLOCK AURA in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -757,7 +913,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -800,6 +969,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -921,6 +1104,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9f0..90ca8cbed 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -272,6 +272,8 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint8_t lock_rx_ctx;
+ uint8_t lock_tx_ctx;
uint16_t flags;
uint16_t cints;
uint16_t qints;
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 5390eb217..e8eba3d91 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint8_t lock_rx_ctx = 0;
+ uint8_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -188,4 +198,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
+ OTX2_NPA_LOCK_MASK "=<1-65535>"
+ OTX2_LOCK_RX_CTX "=1"
+ OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7a8c8f3de..34005ef02 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
@ 2020-03-25 6:51 ` Jerin Jacob
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
3 siblings, 0 replies; 28+ messages in thread
From: Jerin Jacob @ 2020-03-25 6:51 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Andrzej Ostruszka, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K, dpdk-dev
On Tue, Mar 24, 2020 at 10:23 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock NPA aura and pool contexts in NDC cache.
> The device args take hexadecimal bitmask where each bit represent the
> corresponding aura/pool id.
> Example:
> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Please split this series as two as 1/2 needs to go through master and
2/2 needs to go through next-net-mrvl.
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-25 6:51 ` [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Jerin Jacob
@ 2020-03-26 6:33 ` pbhagavatula
2020-03-26 15:56 ` Andrzej Ostruszka [C]
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
3 siblings, 2 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-26 6:33 UTC (permalink / raw)
To: jerinj, aostruszka, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
The application can choose to lock either Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v3 Changes:
- Split series into individual patches as targets are different.
doc/guides/nics/octeontx2.rst | 16 ++
drivers/net/octeontx2/otx2_ethdev.c | 187 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
drivers/net/octeontx2/otx2_rss.c | 23 +++
5 files changed, 241 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index c2d87c9d0..df19443e3 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -209,6 +209,22 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f4901c..6369c2fa9 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
goto fail;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ goto fail;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK rq context");
+ goto fail;
+ }
+ }
+
return 0;
fail:
return rc;
@@ -430,6 +464,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -715,6 +783,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to lock POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to UNLOCK AURA in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -757,7 +913,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -800,6 +969,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -921,6 +1104,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9f0..90ca8cbed 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -272,6 +272,8 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint8_t lock_rx_ctx;
+ uint8_t lock_tx_ctx;
uint16_t flags;
uint16_t cints;
uint16_t qints;
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 5390eb217..e8eba3d91 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint8_t lock_rx_ctx = 0;
+ uint8_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -188,4 +198,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
+ OTX2_NPA_LOCK_MASK "=<1-65535>"
+ OTX2_LOCK_RX_CTX "=1"
+ OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7a8c8f3de..34005ef02 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
` (2 preceding siblings ...)
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
@ 2020-03-26 6:34 ` pbhagavatula
2020-04-06 8:39 ` Jerin Jacob
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
3 siblings, 2 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-26 6:34 UTC (permalink / raw)
To: jerinj, aostruszka, Pavan Nikhilesh, John McNamara,
Marko Kovacevic, Nithin Dabilpuram, Vamsi Attunuru,
Kiran Kumar K
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in NDC cache.
The device args take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v3 Changes:
- Split series into individual patches as targets are different.
v2 Changes:
- Fix formatting in doc(Andrzej).
- Add error returns for all failures(Andrzej).
- Fix devargs parameter list(Andrzej).
doc/guides/eventdevs/octeontx2.rst | 10 +++
doc/guides/mempool/octeontx2.rst | 10 +++
doc/guides/nics/octeontx2.rst | 12 +++
drivers/common/octeontx2/Makefile | 2 +-
drivers/common/octeontx2/meson.build | 2 +-
drivers/common/octeontx2/otx2_common.c | 34 +++++++++
drivers/common/octeontx2/otx2_common.h | 5 ++
.../rte_common_octeontx2_version.map | 7 ++
drivers/event/octeontx2/otx2_evdev.c | 5 +-
drivers/mempool/octeontx2/otx2_mempool.c | 4 +-
drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +-
12 files changed, 163 insertions(+), 6 deletions(-)
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index d4b2515ce..6502f6415 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -148,6 +148,16 @@ Runtime Config Options
-w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:0e:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 2c9a0953b..49b45a04e 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -61,6 +61,16 @@ Runtime Config Options
provide ``max_pools`` parameter to the first PCIe device probed by the given
application.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec72..c2d87c9d0 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -194,6 +194,7 @@ Runtime Config Options
Setting this flag to 1 to select the legacy mode.
For example to select the legacy mode(RSS tag adder as XOR)::
+
-w 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -202,6 +203,7 @@ Runtime Config Options
``ipsec_in_max_spi`` ``devargs`` parameter.
For example::
+
-w 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
@@ -213,6 +215,16 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take a hexadecimal bitmask where each bit represents the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Limitations
-----------
diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
index 48f033dc6..64c5e60e2 100644
--- a/drivers/common/octeontx2/Makefile
+++ b/drivers/common/octeontx2/Makefile
@@ -35,6 +35,6 @@ SRCS-y += otx2_common.c
SRCS-y += otx2_sec_idev.c
LDLIBS += -lrte_eal
-LDLIBS += -lrte_ethdev
+LDLIBS += -lrte_ethdev -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
index cc2c26123..bc4917b8c 100644
--- a/drivers/common/octeontx2/meson.build
+++ b/drivers/common/octeontx2/meson.build
@@ -23,6 +23,6 @@ foreach flag: extra_flags
endif
endforeach
-deps = ['eal', 'pci', 'ethdev']
+deps = ['eal', 'pci', 'ethdev', 'kvargs']
includes += include_directories('../../common/octeontx2',
'../../mempool/octeontx2', '../../bus/pci')
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
index 1a257cf07..5e7272f69 100644
--- a/drivers/common/octeontx2/otx2_common.c
+++ b/drivers/common/octeontx2/otx2_common.c
@@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void)
return cnt ? 0 : -EINVAL;
}
+static int
+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint64_t val;
+
+ val = strtoull(value, NULL, 16);
+
+ *(uint64_t *)extra_args = val;
+
+ return 0;
+}
+
+/*
+ * @internal
+ * Parse common device arguments
+ */
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
+{
+
+ struct otx2_idev_cfg *idev;
+ uint64_t npa_lock_mask = 0;
+
+ idev = otx2_intra_dev_get_cfg();
+
+ if (idev == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
+ &parse_npa_lock_mask, &npa_lock_mask);
+
+ idev->npa_lock_mask = npa_lock_mask;
+}
+
/**
* @internal
*/
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index bf5ea86b3..b3fdefe95 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -8,6 +8,7 @@
#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_cycles.h>
+#include <rte_kvargs.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_io.h>
@@ -49,6 +50,8 @@
(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#endif
+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
+
/* Compiler attributes */
#ifndef __hot
#define __hot __attribute__((hot))
@@ -65,6 +68,7 @@ struct otx2_idev_cfg {
rte_atomic16_t npa_refcnt;
uint16_t npa_refcnt_u16;
};
+ uint64_t npa_lock_mask;
};
struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
@@ -75,6 +79,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
int otx2_npa_lf_active(void *dev);
int otx2_npa_lf_obj_ref(void);
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
/* Log */
extern int otx2_logtype_base;
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 8f2404bd9..e070e898c 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -45,6 +45,13 @@ DPDK_20.0.1 {
otx2_sec_idev_tx_cpt_qp_put;
} DPDK_20.0;
+DPDK_20.0.2 {
+ global:
+
+ otx2_parse_common_devargs;
+
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d20213d78..630073de5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&single_ws);
rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
-
+ otx2_parse_common_devargs(kvlist);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
@@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
OTX2_SSO_SINGLE_WS "=1"
OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_SELFTEST "=1");
+ OTX2_SSO_SELFTEST "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 3a4a9425f..fb630fecf 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
goto exit;
rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
exit:
return aura_sz;
@@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>");
+ OTX2_MAX_POOLS "=<128-1048576>"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index ac2d61861..1cc34f0d1 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
struct npa_aq_enq_req *aura_init_req, *pool_init_req;
struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -ENOMEM;
+
aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
aura_init_req->aura_id = aura_id;
@@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
return 0;
else
return NPA_LF_ERR_AURA_POOL_INIT;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return -ENOMEM;
+ }
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to lock POOL ctx to NDC");
+ return -ENOMEM;
+ }
+
+ return 0;
}
static int
@@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
struct ndc_sync_op *ndc_req;
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -EINVAL;
+
/* Procedure for disabling an aura/pool */
rte_delay_us(10);
npa_lf_aura_op_alloc(aura_handle, 0);
@@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
return NPA_LF_ERR_AURA_POOL_FINI;
}
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock AURA ctx to NDC");
+ return -EINVAL;
+ }
+
+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock POOL ctx to NDC");
+ return -EINVAL;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index f29f01564..5390eb217 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
null_devargs:
@@ -186,4 +187,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
- OTX2_RSS_TAG_AS_XOR "=1");
+ OTX2_RSS_TAG_AS_XOR "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
--
2.17.1
* Re: [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
@ 2020-03-26 15:56 ` Andrzej Ostruszka [C]
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
1 sibling, 0 replies; 28+ messages in thread
From: Andrzej Ostruszka [C] @ 2020-03-26 15:56 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda, John McNamara,
Marko Kovacevic
Cc: dev
On 3/26/20 7:33 AM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> Application can either choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively per each port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> v3 Changes:
> - Split series into individual patches as targets are different.
You might also need to insert a "Depends-on:" tag or something similar
to mark that this patch depends on the common changes in the other one.
I'm not sure how this should work when one patch is destined for master
and the other for next-marvell.
[...]
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index e60f4901c..6369c2fa9 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
> goto fail;
> }
>
> + if (dev->lock_rx_ctx) {
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_CQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK cq context");
> + goto fail;
This "fail" label doesn't do anything useful, so I would remove it and
replace every "goto fail" with "return rc". That way you stay
consistent (e.g. below you return -ENOMEM), just like you do in
nix_cq_rq_uninit() below.
> + }
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + otx2_err("Failed to LOCK rq context");
> + return -ENOMEM;
> + }
> + }
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_RQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK rq context");
> + goto fail;
> + }
> + }
> +
> return 0;
> fail:
> return rc;
> @@ -430,6 +464,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
> return rc;
> }
>
> + if (dev->lock_rx_ctx) {
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = rxq->rq;
> + aq->ctype = NIX_AQ_CTYPE_CQ;
> + aq->op = NIX_AQ_INSTOP_UNLOCK;
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to UNLOCK cq context");
> + return rc;
> + }
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + otx2_err("Failed to UNLOCK rq context");
> + return -ENOMEM;
> + }
> + }
> + aq->qidx = rxq->rq;
> + aq->ctype = NIX_AQ_CTYPE_RQ;
> + aq->op = NIX_AQ_INSTOP_UNLOCK;
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0) {
> + otx2_err("Failed to UNLOCK rq context");
> + return rc;
> + }
> + }
> +
> return 0;
> }
[...]
> diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> index 5390eb217..e8eba3d91 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
> @@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
> #define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
> #define OTX2_SWITCH_HEADER_TYPE "switch_header"
> #define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
> +#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
> +#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
>
> int
> otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> @@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> uint16_t switch_header_type = 0;
> uint16_t flow_max_priority = 3;
> uint16_t ipsec_in_max_spi = 1;
> - uint16_t scalar_enable = 0;
> uint16_t rss_tag_as_xor = 0;
> + uint16_t scalar_enable = 0;
> struct rte_kvargs *kvlist;
> + uint8_t lock_rx_ctx = 0;
> + uint8_t lock_tx_ctx = 0;
I missed this previously, but these need to be uint16_t. This is
because you call parse_flag(), which treats its extra_args as a pointer
to uint16_t.
> if (devargs == NULL)
> goto null_devargs;
> @@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> &parse_switch_header_type, &switch_header_type);
> rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
> &parse_flag, &rss_tag_as_xor);
> + rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
> + &parse_flag, &lock_rx_ctx);
> + rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
> + &parse_flag, &lock_tx_ctx);
> otx2_parse_common_devargs(kvlist);
> rte_kvargs_free(kvlist);
[...]
With that uint16_t fix above:
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
With regards
Andrzej Ostruszka
* [dpdk-dev] [PATCH v4] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-26 15:56 ` Andrzej Ostruszka [C]
@ 2020-03-27 9:53 ` pbhagavatula
2020-03-27 16:19 ` Andrzej Ostruszka
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
1 sibling, 2 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-27 9:53 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
Application can choose to lock Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, on a per-port basis.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
---
Depends on http://patches.dpdk.org/patch/67178/
v4 Changes:
- Fix return path using unnecessary goto.(Andrzej)
- Fix datatype of values passed to devargs parser.(Andrzej)
v3 Changes:
- Split series into individual patches as targets are different.
doc/guides/nics/octeontx2.rst | 16 ++
drivers/net/octeontx2/otx2_ethdev.c | 187 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
| 23 +++
5 files changed, 241 insertions(+), 3 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index c2d87c9d0..df19443e3 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -209,6 +209,22 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f4901c..a6f2c0f42 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
goto fail;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
fail:
return rc;
@@ -430,6 +464,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -715,6 +783,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to lock POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to UNLOCK POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -757,7 +913,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -800,6 +969,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -921,6 +1104,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9f0..90ca8cbed 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -272,6 +272,8 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint8_t lock_rx_ctx;
+ uint8_t lock_tx_ctx;
uint16_t flags;
uint16_t cints;
uint16_t qints;
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 5390eb217..b0480504c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint16_t lock_rx_ctx = 0;
+ uint16_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -188,4 +198,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
+ OTX2_NPA_LOCK_MASK "=<1-65535>"
+ OTX2_LOCK_RX_CTX "=1"
+ OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7a8c8f3de..34005ef02 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v4] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
@ 2020-03-27 16:19 ` Andrzej Ostruszka
2020-03-27 17:49 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
1 sibling, 1 reply; 28+ messages in thread
From: Andrzej Ostruszka @ 2020-03-27 16:19 UTC (permalink / raw)
To: pbhagavatula, jerinj, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: dev
On 3/27/20 10:53 AM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> The application can choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
> ---
> Depends on http://patches.dpdk.org/patch/67178/
> v4 Changes:
> - Fix return path using unnecessary goto. (Andrzej)
> - Fix datatype of values passed to devargs parser. (Andrzej)
>
> v3 Changes:
> - Split series into individual patches as targets are different.
>
> doc/guides/nics/octeontx2.rst | 16 ++
> drivers/net/octeontx2/otx2_ethdev.c | 187 +++++++++++++++++++-
> drivers/net/octeontx2/otx2_ethdev.h | 2 +
> drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
> drivers/net/octeontx2/otx2_rss.c | 23 +++
> 5 files changed, 241 insertions(+), 3 deletions(-)
[...]
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index e60f4901c..a6f2c0f42 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
> goto fail;
> }
>
> + if (dev->lock_rx_ctx) {
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_CQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + /* The shared memory buffer can be full.
> + * Flush it and retry
> + */
> + otx2_mbox_msg_send(mbox, 0);
> + rc = otx2_mbox_wait_for_rsp(mbox, 0);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK cq context");
> + return rc;
> + }
> +
> + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> + if (!aq) {
> + otx2_err("Failed to LOCK rq context");
> + return -ENOMEM;
> + }
> + }
> + aq->qidx = qid;
> + aq->ctype = NIX_AQ_CTYPE_RQ;
> + aq->op = NIX_AQ_INSTOP_LOCK;
> + rc = otx2_mbox_process(mbox);
> + if (rc < 0) {
> + otx2_err("Failed to LOCK rq context");
> + return rc;
> + }
> + }
> +
> return 0;
> fail:
> return rc;
Pavan - sorry for being so ... finicky :)
I've said 'replace all "goto fail" with "return rc"' and I meant that.
So not only the "goto fail" in your changes but all "goto fail" in that
function.
Apart from that:
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
With regards
Andrzej
PS. Thanks for the patience ;)
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v4] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-27 16:19 ` Andrzej Ostruszka
@ 2020-03-27 17:49 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-03-27 17:49 UTC (permalink / raw)
To: Andrzej Ostruszka, Jerin Jacob Kollanukkaran,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda, John McNamara,
Marko Kovacevic
Cc: dev
<Snip>
>> fail:
>> return rc;
>
>Pavan - sorry for being so ... finicky :)
>I've said 'replace all "goto fail" with "return rc"' and I meant that.
>So not only the "goto fail" in your changes but all "goto fail" in that
>function.
Ah, sure I will send a v5.
>
>Apart from that:
>Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
>
>With regards
>Andrzej
>
>PS. Thanks for the patience ;)
😊
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v5] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-03-27 16:19 ` Andrzej Ostruszka
@ 2020-03-31 13:58 ` pbhagavatula
2020-06-26 5:00 ` Jerin Jacob
2020-06-28 22:18 ` [dpdk-dev] [PATCH v6] " pbhagavatula
1 sibling, 2 replies; 28+ messages in thread
From: pbhagavatula @ 2020-03-31 13:58 UTC (permalink / raw)
To: jerinj, aostruszka, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
The application can choose to lock Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
---
Depends on http://patches.dpdk.org/patch/67178/
v5 Changes:
- Remove redundant goto. (Andrzej)
v4 Changes:
- Fix return path using unnecessary goto. (Andrzej)
- Fix datatype of values passed to devargs parser. (Andrzej)
v3 Changes:
- Split series into individual patches as targets are different.
doc/guides/nics/octeontx2.rst | 16 ++
drivers/net/octeontx2/otx2_ethdev.c | 196 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
drivers/net/octeontx2/otx2_rss.c            |  23 +++
5 files changed, 244 insertions(+), 9 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index c2d87c9d0..df19443e3 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -209,6 +209,22 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e60f4901c..49bf7ef9f 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -286,8 +286,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
NIX_CQ_ALIGN, dev->node);
if (rz == NULL) {
otx2_err("Failed to allocate mem for cq hw ring");
- rc = -ENOMEM;
- goto fail;
+ return -ENOMEM;
}
memset(rz->addr, 0, rz->len);
rxq->desc = (uintptr_t)rz->addr;
@@ -336,7 +335,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
rc = otx2_mbox_process(mbox);
if (rc) {
otx2_err("Failed to init cq context");
- goto fail;
+ return rc;
}
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
@@ -378,12 +377,44 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
rc = otx2_mbox_process(mbox);
if (rc) {
otx2_err("Failed to init rq context");
- goto fail;
+ return rc;
+ }
+
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK rq context");
+ return rc;
+ }
}
return 0;
-fail:
- return rc;
}
static int
@@ -430,6 +461,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -715,6 +780,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to lock POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to UNLOCK AURA in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -757,7 +910,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -800,6 +966,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -921,6 +1101,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e5684f9f0..90ca8cbed 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -272,6 +272,8 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint8_t lock_rx_ctx;
+ uint8_t lock_tx_ctx;
uint16_t flags;
uint16_t cints;
uint16_t qints;
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 5390eb217..b0480504c 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -124,6 +124,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -134,9 +136,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint16_t lock_rx_ctx = 0;
+ uint16_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -161,6 +165,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -169,6 +177,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -188,4 +198,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
+ OTX2_NPA_LOCK_MASK "=<1-65535>"
+ OTX2_LOCK_RX_CTX "=1"
+ OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7a8c8f3de..34005ef02 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
@ 2020-04-06 8:39 ` Jerin Jacob
2020-04-16 22:33 ` Thomas Monjalon
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
1 sibling, 1 reply; 28+ messages in thread
From: Jerin Jacob @ 2020-04-06 8:39 UTC (permalink / raw)
To: Pavan Nikhilesh, Thomas Monjalon, David Marchand
Cc: Jerin Jacob, Andrzej Ostruszka, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Vamsi Attunuru, Kiran Kumar K, dpdk-dev
On Thu, Mar 26, 2020 at 12:04 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock NPA aura and pool contexts in NDC cache.
> The device args take a hexadecimal bitmask where each bit represents the
> corresponding aura/pool id.
> Example:
> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Since it is a mempool driver patch, request @Thomas Monjalon or @David
Marchand to take it through the master.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache
2020-04-06 8:39 ` Jerin Jacob
@ 2020-04-16 22:33 ` Thomas Monjalon
2020-04-21 7:37 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 28+ messages in thread
From: Thomas Monjalon @ 2020-04-16 22:33 UTC (permalink / raw)
To: Pavan Nikhilesh, Jerin Jacob
Cc: David Marchand, dev, Jerin Jacob, Andrzej Ostruszka,
John McNamara, Marko Kovacevic, Nithin Dabilpuram,
Vamsi Attunuru, Kiran Kumar K
06/04/2020 10:39, Jerin Jacob:
> On Thu, Mar 26, 2020 at 12:04 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Add device arguments to lock NPA aura and pool contexts in NDC cache.
> > The device args take a hexadecimal bitmask where each bit represents the
> > corresponding aura/pool id.
> > Example:
> > -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
>
> Since it is a mempool driver patch, request @Thomas Monjalon or @David
> Marchand to take it through the master.
I see this warning:
ERROR: symbol otx2_parse_common_devargs is added in the DPDK_20.0.2 section,
but is expected to be added in the EXPERIMENTAL section of the version map
Ideally the symbol should be marked with __rte_internal.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache
2020-04-16 22:33 ` Thomas Monjalon
@ 2020-04-21 7:37 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-04-21 7:37 UTC (permalink / raw)
To: Thomas Monjalon, Jerin Jacob
Cc: David Marchand, dev, Jerin Jacob Kollanukkaran,
Andrzej Ostruszka [C],
John McNamara, Marko Kovacevic, Nithin Kumar Dabilpuram,
Vamsi Krishna Attunuru, Kiran Kumar Kokkilagadda
>06/04/2020 10:39, Jerin Jacob:
>> On Thu, Mar 26, 2020 at 12:04 PM <pbhagavatula@marvell.com> wrote:
>> >
>> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> >
>> > Add device arguments to lock NPA aura and pool contexts in NDC cache.
>> > The device args take a hexadecimal bitmask where each bit represents the
>> > corresponding aura/pool id.
>> > Example:
>> > -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>> >
>> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Acked-by: Jerin Jacob <jerinj@marvell.com>
>>
>> Since it is a mempool driver patch, request @Thomas Monjalon or @David
>> Marchand to take it through the master.
>
>I see this warning:
>ERROR: symbol otx2_parse_common_devargs is added in the DPDK_20.0.2 section,
>but is expected to be added in the EXPERIMENTAL section of the version map
>
>Ideally the symbol should be marked with __rte_internal.
>
Will send a v4 thanks.
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to lock ctx in cache
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-04-06 8:39 ` Jerin Jacob
@ 2020-04-22 8:06 ` pbhagavatula
2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
2020-05-11 10:07 ` [dpdk-dev] [PATCH v5] " pbhagavatula
1 sibling, 2 replies; 28+ messages in thread
From: pbhagavatula @ 2020-04-22 8:06 UTC (permalink / raw)
To: jerinj, thomas, Pavan Nikhilesh, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Kiran Kumar K
Cc: aostruszka, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in NDC cache.
The device args take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
Depends on series http://patches.dpdk.org/project/dpdk/list/?series=5004
v4 Changes:
- Mark `otx2_parse_common_devargs` as __rte_internal.
v3 Changes:
- Split series into individual patches as targets are different.
v2 Changes:
- Fix formatting in doc (Andrzej).
- Add error returns for all failures (Andrzej).
- Fix devargs parameter list (Andrzej).
doc/guides/eventdevs/octeontx2.rst | 10 +++
doc/guides/mempool/octeontx2.rst | 10 +++
doc/guides/nics/octeontx2.rst | 12 +++
drivers/common/octeontx2/Makefile | 2 +-
drivers/common/octeontx2/meson.build | 2 +-
drivers/common/octeontx2/otx2_common.c | 34 +++++++++
drivers/common/octeontx2/otx2_common.h | 5 ++
.../rte_common_octeontx2_version.map | 13 ++++
drivers/event/octeontx2/otx2_evdev.c | 5 +-
drivers/mempool/octeontx2/otx2_mempool.c | 4 +-
drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +-
12 files changed, 169 insertions(+), 6 deletions(-)
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index d4b2515ce..6502f6415 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -148,6 +148,16 @@ Runtime Config Options
-w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:0e:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 2c9a0953b..49b45a04e 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -61,6 +61,16 @@ Runtime Config Options
provide ``max_pools`` parameter to the first PCIe device probed by the given
application.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 60187ec72..c2d87c9d0 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -194,6 +194,7 @@ Runtime Config Options
Setting this flag to 1 to select the legacy mode.
For example to select the legacy mode(RSS tag adder as XOR)::
+
-w 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -202,6 +203,7 @@ Runtime Config Options
``ipsec_in_max_spi`` ``devargs`` parameter.
For example::
+
-w 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
@@ -213,6 +215,16 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Limitations
-----------
diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
index efe3da2cc..260da8dd3 100644
--- a/drivers/common/octeontx2/Makefile
+++ b/drivers/common/octeontx2/Makefile
@@ -34,6 +34,6 @@ SRCS-y += otx2_common.c
SRCS-y += otx2_sec_idev.c
LDLIBS += -lrte_eal
-LDLIBS += -lrte_ethdev
+LDLIBS += -lrte_ethdev -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
index 996ddba14..f2c04342e 100644
--- a/drivers/common/octeontx2/meson.build
+++ b/drivers/common/octeontx2/meson.build
@@ -21,6 +21,6 @@ foreach flag: extra_flags
endif
endforeach
-deps = ['eal', 'pci', 'ethdev']
+deps = ['eal', 'pci', 'ethdev', 'kvargs']
includes += include_directories('../../common/octeontx2',
'../../mempool/octeontx2', '../../bus/pci')
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
index 1a257cf07..5e7272f69 100644
--- a/drivers/common/octeontx2/otx2_common.c
+++ b/drivers/common/octeontx2/otx2_common.c
@@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void)
return cnt ? 0 : -EINVAL;
}
+static int
+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint64_t val;
+
+ val = strtoull(value, NULL, 16);
+
+ *(uint64_t *)extra_args = val;
+
+ return 0;
+}
+
+/*
+ * @internal
+ * Parse common device arguments
+ */
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
+{
+
+ struct otx2_idev_cfg *idev;
+ uint64_t npa_lock_mask = 0;
+
+ idev = otx2_intra_dev_get_cfg();
+
+ if (idev == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
+ &parse_npa_lock_mask, &npa_lock_mask);
+
+ idev->npa_lock_mask = npa_lock_mask;
+}
+
/**
* @internal
*/
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index e62cdea07..f0e98fbbc 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -8,6 +8,7 @@
#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_cycles.h>
+#include <rte_kvargs.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_io.h>
@@ -49,6 +50,8 @@
(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#endif
+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
+
/* Intra device related functions */
struct otx2_npa_lf;
struct otx2_idev_cfg {
@@ -60,6 +63,7 @@ struct otx2_idev_cfg {
rte_atomic16_t npa_refcnt;
uint16_t npa_refcnt_u16;
};
+ uint64_t npa_lock_mask;
};
struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
@@ -70,6 +74,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
int otx2_npa_lf_active(void *dev);
int otx2_npa_lf_obj_ref(void);
+void __rte_internal otx2_parse_common_devargs(struct rte_kvargs *kvlist);
/* Log */
extern int otx2_logtype_base;
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 8f2404bd9..74e418c82 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -45,8 +45,21 @@ DPDK_20.0.1 {
otx2_sec_idev_tx_cpt_qp_put;
} DPDK_20.0;
+DPDK_20.0.2 {
+ global:
+
+ otx2_parse_common_devargs;
+
+} DPDK_20.0;
+
EXPERIMENTAL {
global:
otx2_logtype_ep;
};
+
+INTERNAL {
+ global:
+
+ otx2_parse_common_devargs;
+};
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d20213d78..630073de5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&single_ws);
rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
-
+ otx2_parse_common_devargs(kvlist);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
@@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
OTX2_SSO_SINGLE_WS "=1"
OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_SELFTEST "=1");
+ OTX2_SSO_SELFTEST "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 3a4a9425f..fb630fecf 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
goto exit;
rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
exit:
return aura_sz;
@@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>");
+ OTX2_MAX_POOLS "=<128-1048576>"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 162b7f01d..ade9fa6d3 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
struct npa_aq_enq_req *aura_init_req, *pool_init_req;
struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -ENOMEM;
+
aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
aura_init_req->aura_id = aura_id;
@@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
return 0;
else
return NPA_LF_ERR_AURA_POOL_INIT;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return -ENOMEM;
+ }
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to lock POOL ctx to NDC");
+ return -ENOMEM;
+ }
+
+ return 0;
}
static int
@@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
struct ndc_sync_op *ndc_req;
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -EINVAL;
+
/* Procedure for disabling an aura/pool */
rte_delay_us(10);
npa_lf_aura_op_alloc(aura_handle, 0);
@@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
return NPA_LF_ERR_AURA_POOL_FINI;
}
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock AURA ctx to NDC");
+ return -EINVAL;
+ }
+
+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock POOL ctx to NDC");
+ return -EINVAL;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index f29f01564..5390eb217 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
null_devargs:
@@ -186,4 +187,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
- OTX2_RSS_TAG_AS_XOR "=1");
+ OTX2_RSS_TAG_AS_XOR "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to lock ctx in cache
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
@ 2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
2020-05-04 22:43 ` Thomas Monjalon
2020-05-11 10:07 ` [dpdk-dev] [PATCH v5] " pbhagavatula
1 sibling, 1 reply; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-05-01 10:21 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran, thomas,
John McNamara, Marko Kovacevic, Nithin Kumar Dabilpuram,
Kiran Kumar Kokkilagadda
Cc: Andrzej Ostruszka [C], dev
>Subject: [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to
>lock ctx in cache
>
>From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
>Add device arguments to lock NPA aura and pool contexts in NDC cache.
>The device args take hexadecimal bitmask where each bit represent the
>corresponding aura/pool id.
>Example:
> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>
>Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>Acked-by: Jerin Jacob <jerinj@marvell.com>
>---
>
>Depends on series
>http://patches.dpdk.org/project/dpdk/list/?series=5004
>
> v4 Changes:
> - Mark `otx2_parse_common_devargs` as __rte_internal.
Ping @thomas
> v3 Changes:
> - Split series into individual patches as targets are different.
> v2 Changes:
> - Fix formatting in doc(Andrzej).
> - Add error returns for all failures(Andrzej).
> - Fix devargs parameter list(Andrzej).
>
> doc/guides/eventdevs/octeontx2.rst | 10 +++
> doc/guides/mempool/octeontx2.rst | 10 +++
> doc/guides/nics/octeontx2.rst | 12 +++
> drivers/common/octeontx2/Makefile | 2 +-
> drivers/common/octeontx2/meson.build | 2 +-
> drivers/common/octeontx2/otx2_common.c | 34 +++++++++
> drivers/common/octeontx2/otx2_common.h | 5 ++
> .../rte_common_octeontx2_version.map | 13 ++++
> drivers/event/octeontx2/otx2_evdev.c | 5 +-
> drivers/mempool/octeontx2/otx2_mempool.c | 4 +-
> drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++
> drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +-
> 12 files changed, 169 insertions(+), 6 deletions(-)
>
>diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
>index d4b2515ce..6502f6415 100644
>--- a/doc/guides/eventdevs/octeontx2.rst
>+++ b/doc/guides/eventdevs/octeontx2.rst
>@@ -148,6 +148,16 @@ Runtime Config Options
>
> -w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
>
>+- ``Lock NPA contexts in NDC``
>+
>+ Lock NPA aura and pool contexts in NDC cache.
>+ The device args take hexadecimal bitmask where each bit represent the
>+ corresponding aura/pool id.
>+
>+ For example::
>+
>+ -w 0002:0e:00.0,npa_lock_mask=0xf
>+
> Debugging Options
> ~~~~~~~~~~~~~~~~~
>
>diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
>index 2c9a0953b..49b45a04e 100644
>--- a/doc/guides/mempool/octeontx2.rst
>+++ b/doc/guides/mempool/octeontx2.rst
>@@ -61,6 +61,16 @@ Runtime Config Options
> provide ``max_pools`` parameter to the first PCIe device probed by
>the given
> application.
>
>+- ``Lock NPA contexts in NDC``
>+
>+ Lock NPA aura and pool contexts in NDC cache.
>+ The device args take hexadecimal bitmask where each bit represent the
>+ corresponding aura/pool id.
>+
>+ For example::
>+
>+ -w 0002:02:00.0,npa_lock_mask=0xf
>+
> Debugging Options
> ~~~~~~~~~~~~~~~~~
>
>diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
>index 60187ec72..c2d87c9d0 100644
>--- a/doc/guides/nics/octeontx2.rst
>+++ b/doc/guides/nics/octeontx2.rst
>@@ -194,6 +194,7 @@ Runtime Config Options
> Setting this flag to 1 to select the legacy mode.
>
> For example to select the legacy mode(RSS tag adder as XOR)::
>+
> -w 0002:02:00.0,tag_as_xor=1
>
> - ``Max SPI for inbound inline IPsec`` (default ``1``)
>@@ -202,6 +203,7 @@ Runtime Config Options
> ``ipsec_in_max_spi`` ``devargs`` parameter.
>
> For example::
>+
> -w 0002:02:00.0,ipsec_in_max_spi=128
>
> With the above configuration, application can enable inline IPsec
>processing
>@@ -213,6 +215,16 @@ Runtime Config Options
> parameters to all the PCIe devices if application requires to configure
>on
> all the ethdev ports.
>
>+- ``Lock NPA contexts in NDC``
>+
>+ Lock NPA aura and pool contexts in NDC cache.
>+ The device args take hexadecimal bitmask where each bit represent the
>+ corresponding aura/pool id.
>+
>+ For example::
>+
>+ -w 0002:02:00.0,npa_lock_mask=0xf
>+
> Limitations
> -----------
>
>diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
>index efe3da2cc..260da8dd3 100644
>--- a/drivers/common/octeontx2/Makefile
>+++ b/drivers/common/octeontx2/Makefile
>@@ -34,6 +34,6 @@ SRCS-y += otx2_common.c
> SRCS-y += otx2_sec_idev.c
>
> LDLIBS += -lrte_eal
>-LDLIBS += -lrte_ethdev
>+LDLIBS += -lrte_ethdev -lrte_kvargs
>
> include $(RTE_SDK)/mk/rte.lib.mk
>diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
>index 996ddba14..f2c04342e 100644
>--- a/drivers/common/octeontx2/meson.build
>+++ b/drivers/common/octeontx2/meson.build
>@@ -21,6 +21,6 @@ foreach flag: extra_flags
> endif
> endforeach
>
>-deps = ['eal', 'pci', 'ethdev']
>+deps = ['eal', 'pci', 'ethdev', 'kvargs']
> includes += include_directories('../../common/octeontx2',
> '../../mempool/octeontx2', '../../bus/pci')
>diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
>index 1a257cf07..5e7272f69 100644
>--- a/drivers/common/octeontx2/otx2_common.c
>+++ b/drivers/common/octeontx2/otx2_common.c
>@@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void)
> return cnt ? 0 : -EINVAL;
> }
>
>+static int
>+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
>+{
>+ RTE_SET_USED(key);
>+ uint64_t val;
>+
>+ val = strtoull(value, NULL, 16);
>+
>+ *(uint64_t *)extra_args = val;
>+
>+ return 0;
>+}
>+
>+/*
>+ * @internal
>+ * Parse common device arguments
>+ */
>+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
>+{
>+
>+ struct otx2_idev_cfg *idev;
>+ uint64_t npa_lock_mask = 0;
>+
>+ idev = otx2_intra_dev_get_cfg();
>+
>+ if (idev == NULL)
>+ return;
>+
>+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
>+ &parse_npa_lock_mask, &npa_lock_mask);
>+
>+ idev->npa_lock_mask = npa_lock_mask;
>+}
>+
> /**
> * @internal
> */
>diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
>index e62cdea07..f0e98fbbc 100644
>--- a/drivers/common/octeontx2/otx2_common.h
>+++ b/drivers/common/octeontx2/otx2_common.h
>@@ -8,6 +8,7 @@
> #include <rte_atomic.h>
> #include <rte_common.h>
> #include <rte_cycles.h>
>+#include <rte_kvargs.h>
> #include <rte_memory.h>
> #include <rte_memzone.h>
> #include <rte_io.h>
>@@ -49,6 +50,8 @@
> (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
> #endif
>
>+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
>+
> /* Intra device related functions */
> struct otx2_npa_lf;
> struct otx2_idev_cfg {
>@@ -60,6 +63,7 @@ struct otx2_idev_cfg {
> rte_atomic16_t npa_refcnt;
> uint16_t npa_refcnt_u16;
> };
>+ uint64_t npa_lock_mask;
> };
>
> struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
>@@ -70,6 +74,7 @@ struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
> void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
> int otx2_npa_lf_active(void *dev);
> int otx2_npa_lf_obj_ref(void);
>+void __rte_internal otx2_parse_common_devargs(struct rte_kvargs *kvlist);
>
> /* Log */
> extern int otx2_logtype_base;
>diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
>index 8f2404bd9..74e418c82 100644
>--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
>+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
>@@ -45,8 +45,21 @@ DPDK_20.0.1 {
> otx2_sec_idev_tx_cpt_qp_put;
> } DPDK_20.0;
>
>+DPDK_20.0.2 {
>+ global:
>+
>+ otx2_parse_common_devargs;
>+
>+} DPDK_20.0;
>+
> EXPERIMENTAL {
> global:
>
> otx2_logtype_ep;
> };
>+
>+INTERNAL {
>+ global:
>+
>+ otx2_parse_common_devargs;
>+};
>diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
>index d20213d78..630073de5 100644
>--- a/drivers/event/octeontx2/otx2_evdev.c
>+++ b/drivers/event/octeontx2/otx2_evdev.c
>@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
> &single_ws);
> rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS,
>&parse_sso_kvargs_dict,
> dev);
>-
>+ otx2_parse_common_devargs(kvlist);
> dev->dual_ws = !single_ws;
> rte_kvargs_free(kvlist);
> }
>@@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
> RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
> OTX2_SSO_SINGLE_WS "=1"
> OTX2_SSO_GGRP_QOS "=<string>"
>- OTX2_SSO_SELFTEST "=1");
>+ OTX2_SSO_SELFTEST "=1"
>+ OTX2_NPA_LOCK_MASK "=<1-65535>");
>diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
>index 3a4a9425f..fb630fecf 100644
>--- a/drivers/mempool/octeontx2/otx2_mempool.c
>+++ b/drivers/mempool/octeontx2/otx2_mempool.c
>@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
> goto exit;
>
> rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
>+ otx2_parse_common_devargs(kvlist);
> rte_kvargs_free(kvlist);
> exit:
> return aura_sz;
>@@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
> RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
> RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
> RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
>- OTX2_MAX_POOLS "=<128-1048576>");
>+ OTX2_MAX_POOLS "=<128-1048576>"
>+ OTX2_NPA_LOCK_MASK "=<1-65535>");
>diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
>index 162b7f01d..ade9fa6d3 100644
>--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
>+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
>@@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
> struct npa_aq_enq_req *aura_init_req, *pool_init_req;
> struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
> struct otx2_mbox_dev *mdev = &mbox->dev[0];
>+ struct otx2_idev_cfg *idev;
> int rc, off;
>
>+ idev = otx2_intra_dev_get_cfg();
>+ if (idev == NULL)
>+ return -ENOMEM;
>+
> aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>
> aura_init_req->aura_id = aura_id;
>@@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
> return 0;
> else
> return NPA_LF_ERR_AURA_POOL_INIT;
>+
>+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
>+ return 0;
>+
>+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>+ aura_init_req->aura_id = aura_id;
>+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
>+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
>+
>+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>+ if (!pool_init_req) {
>+ /* The shared memory buffer can be full.
>+ * Flush it and retry
>+ */
>+ otx2_mbox_msg_send(mbox, 0);
>+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
>+ if (rc < 0) {
>+ otx2_err("Failed to LOCK AURA context");
>+ return -ENOMEM;
>+ }
>+
>+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>+ if (!pool_init_req) {
>+ otx2_err("Failed to LOCK POOL context");
>+ return -ENOMEM;
>+ }
>+ }
>+ pool_init_req->aura_id = aura_id;
>+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
>+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
>+
>+ rc = otx2_mbox_process(mbox);
>+ if (rc < 0) {
>+ otx2_err("Failed to lock POOL ctx to NDC");
>+ return -ENOMEM;
>+ }
>+
>+ return 0;
> }
>
> static int
>@@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
> struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
> struct otx2_mbox_dev *mdev = &mbox->dev[0];
> struct ndc_sync_op *ndc_req;
>+ struct otx2_idev_cfg *idev;
> int rc, off;
>
>+ idev = otx2_intra_dev_get_cfg();
>+ if (idev == NULL)
>+ return -EINVAL;
>+
> /* Procedure for disabling an aura/pool */
> rte_delay_us(10);
> npa_lf_aura_op_alloc(aura_handle, 0);
>@@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
> otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
> return NPA_LF_ERR_AURA_POOL_FINI;
> }
>+
>+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
>+ return 0;
>+
>+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>+ aura_req->aura_id = aura_id;
>+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
>+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
>+
>+ rc = otx2_mbox_process(mbox);
>+ if (rc < 0) {
>+ otx2_err("Failed to unlock AURA ctx to NDC");
>+ return -EINVAL;
>+ }
>+
>+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
>+ pool_req->aura_id = aura_id;
>+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
>+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
>+
>+ rc = otx2_mbox_process(mbox);
>+ if (rc < 0) {
>+ otx2_err("Failed to unlock POOL ctx to NDC");
>+ return -EINVAL;
>+ }
>+
> return 0;
> }
>
>diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>index f29f01564..5390eb217 100644
>--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
>+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
>@@ -161,6 +161,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
> &parse_switch_header_type, &switch_header_type);
> rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
> &parse_flag, &rss_tag_as_xor);
>+ otx2_parse_common_devargs(kvlist);
> rte_kvargs_free(kvlist);
>
> null_devargs:
>@@ -186,4 +187,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
> OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
> OTX2_FLOW_MAX_PRIORITY "=<1-32>"
> OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa>"
>- OTX2_RSS_TAG_AS_XOR "=1");
>+ OTX2_RSS_TAG_AS_XOR "=1"
>+ OTX2_NPA_LOCK_MASK "=<1-65535>");
>--
>2.17.1
* Re: [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to lock ctx in cache
2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
@ 2020-05-04 22:43 ` Thomas Monjalon
2020-05-10 22:35 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 28+ messages in thread
From: Thomas Monjalon @ 2020-05-04 22:43 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula
Cc: Jerin Jacob Kollanukkaran, John McNamara, Marko Kovacevic,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda, dev,
Andrzej Ostruszka [C]
01/05/2020 12:21, Pavan Nikhilesh Bhagavatula:
> >Subject: [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to
> >lock ctx in cache
> >
> >From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> >Add device arguments to lock NPA aura and pool contexts in NDC cache.
> >The device args take hexadecimal bitmask where each bit represent the
> >corresponding aura/pool id.
> >Example:
> > -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
> >
> >Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >Acked-by: Jerin Jacob <jerinj@marvell.com>
> >---
> >
> >Depends on series
> >http://patches.dpdk.org/project/dpdk/list/?series=5004
> >
> > v4 Changes:
> > - Mark `otx2_parse_common_devargs` as __rte_internal.
>
> Ping @thomas
Now that __rte_internal marking was merged,
this patch is candidate for -rc2, but...
> >a/drivers/common/octeontx2/rte_common_octeontx2_version.map
> >b/drivers/common/octeontx2/rte_common_octeontx2_version.map
> >@@ -45,8 +45,21 @@ DPDK_20.0.1 {
> > otx2_sec_idev_tx_cpt_qp_put;
> > } DPDK_20.0;
> >
> >+DPDK_20.0.2 {
> >+ global:
> >+
> >+ otx2_parse_common_devargs;
> >+
> >+} DPDK_20.0;
Why are you adding the symbol both in 20.0.2 and INTERNAL below?
Also, that's a pity you did not take time to convert all the symbols
of this internal library to __rte_internal.
> >+
> > EXPERIMENTAL {
> > global:
> >
> > otx2_logtype_ep;
> > };
> >+
> >+INTERNAL {
> >+ global:
> >+
> >+ otx2_parse_common_devargs;
> >+};
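For reference, resolving the duplication Thomas flags would mean dropping the symbol from the DPDK_20.0.2 block and exporting it only from the INTERNAL node; a sketch of the resulting version-script section (exact layout assumed, following DPDK's internal version node convention):

```
INTERNAL {
	global:

	otx2_parse_common_devargs;

	local: *;
};
```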
* Re: [dpdk-dev] [EXT] Re: [PATCH v4] mempool/octeontx2: add devargs to lock ctx in cache
2020-05-04 22:43 ` Thomas Monjalon
@ 2020-05-10 22:35 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 28+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-05-10 22:35 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jerin Jacob Kollanukkaran, John McNamara, Marko Kovacevic,
Nithin Kumar Dabilpuram, Kiran Kumar Kokkilagadda, dev,
Andrzej Ostruszka [C]
>01/05/2020 12:21, Pavan Nikhilesh Bhagavatula:
>> >Subject: [dpdk-dev] [PATCH v4] mempool/octeontx2: add devargs to
>> >lock ctx in cache
>> >
>> >From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> >
>> >Add device arguments to lock NPA aura and pool contexts in NDC cache.
>> >The device args take hexadecimal bitmask where each bit represent the
>> >corresponding aura/pool id.
>> >Example:
>> > -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>> >
>> >Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> >Acked-by: Jerin Jacob <jerinj@marvell.com>
>> >---
>> >
>> >Depends on series
>> >http://patches.dpdk.org/project/dpdk/list/?series=5004
>> >
>> > v4 Changes:
>> > - Mark `otx2_parse_common_devargs` as __rte_internal.
>>
>> Ping @thomas
>
>Now that __rte_internal marking was merged,
>this patch is candidate for -rc2, but...
>
>
>> >a/drivers/common/octeontx2/rte_common_octeontx2_version.map
>> >b/drivers/common/octeontx2/rte_common_octeontx2_version.map
>> >@@ -45,8 +45,21 @@ DPDK_20.0.1 {
>> > otx2_sec_idev_tx_cpt_qp_put;
>> > } DPDK_20.0;
>> >
>> >+DPDK_20.0.2 {
>> >+ global:
>> >+
>> >+ otx2_parse_common_devargs;
>> >+
>> >+} DPDK_20.0;
>
>Why are you adding the symbol both in 20.0.2 and INTERNAL below?
>
>
>Also, that's a pity you did not take time to convert all the symbols
>of this internal library to __rte_internal.
>
My bad will send v5.
>
>> >+
>> > EXPERIMENTAL {
>> > global:
>> >
>> > otx2_logtype_ep;
>> > };
>> >+
>> >+INTERNAL {
>> >+ global:
>> >+
>> >+ otx2_parse_common_devargs;
>> >+};
>
>
* [dpdk-dev] [PATCH v5] mempool/octeontx2: add devargs to lock ctx in cache
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
@ 2020-05-11 10:07 ` pbhagavatula
2020-05-19 16:15 ` Thomas Monjalon
1 sibling, 1 reply; 28+ messages in thread
From: pbhagavatula @ 2020-05-11 10:07 UTC (permalink / raw)
To: jerinj, Pavan Nikhilesh, John McNamara, Marko Kovacevic,
Nithin Dabilpuram, Kiran Kumar K, Ray Kinsella, Neil Horman
Cc: thomas, dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock NPA aura and pool contexts in NDC cache.
The device args take a hexadecimal bitmask where each bit represents the
corresponding aura/pool id.
Example:
-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
Depends on series http://patches.dpdk.org/project/dpdk/list/?series=9972
v5 Changes:
- Rebase on ToT.
v4 Changes:
- Mark `otx2_parse_common_devargs` as __rte_internal.
v3 Changes:
- Split series into individual patches as targets are different.
v2 Changes:
- Fix formatting in doc(Andrzej).
- Add error returns for all failures(Andrzej).
- Fix devargs parameter list(Andrzej).
doc/guides/eventdevs/octeontx2.rst | 10 +++
doc/guides/mempool/octeontx2.rst | 10 +++
doc/guides/nics/octeontx2.rst | 12 +++
drivers/common/octeontx2/Makefile | 2 +-
drivers/common/octeontx2/meson.build | 2 +-
drivers/common/octeontx2/otx2_common.c | 34 +++++++++
drivers/common/octeontx2/otx2_common.h | 6 ++
.../rte_common_octeontx2_version.map | 1 +
drivers/event/octeontx2/otx2_evdev.c | 5 +-
drivers/mempool/octeontx2/otx2_mempool.c | 4 +-
drivers/mempool/octeontx2/otx2_mempool_ops.c | 74 +++++++++++++++++++
drivers/net/octeontx2/otx2_ethdev_devargs.c | 4 +-
12 files changed, 158 insertions(+), 6 deletions(-)
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index d4b2515ce..6502f6415 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -148,6 +148,16 @@ Runtime Config Options
-w 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:0e:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
index 2c9a0953b..49b45a04e 100644
--- a/doc/guides/mempool/octeontx2.rst
+++ b/doc/guides/mempool/octeontx2.rst
@@ -61,6 +61,16 @@ Runtime Config Options
provide ``max_pools`` parameter to the first PCIe device probed by the given
application.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
Debugging Options
~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 85d378f67..24089ce67 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -195,6 +195,7 @@ Runtime Config Options
Setting this flag to 1 to select the legacy mode.
For example to select the legacy mode(RSS tag adder as XOR)::
+
-w 0002:02:00.0,tag_as_xor=1
- ``Max SPI for inbound inline IPsec`` (default ``1``)
@@ -203,6 +204,7 @@ Runtime Config Options
``ipsec_in_max_spi`` ``devargs`` parameter.
For example::
+
-w 0002:02:00.0,ipsec_in_max_spi=128
With the above configuration, application can enable inline IPsec processing
@@ -214,6 +216,16 @@ Runtime Config Options
parameters to all the PCIe devices if application requires to configure on
all the ethdev ports.
+- ``Lock NPA contexts in NDC``
+
+ Lock NPA aura and pool contexts in NDC cache.
+ The device args take hexadecimal bitmask where each bit represent the
+ corresponding aura/pool id.
+
+ For example::
+
+ -w 0002:02:00.0,npa_lock_mask=0xf
+
.. _otx2_tmapi:
Traffic Management API
diff --git a/drivers/common/octeontx2/Makefile b/drivers/common/octeontx2/Makefile
index efe3da2cc..260da8dd3 100644
--- a/drivers/common/octeontx2/Makefile
+++ b/drivers/common/octeontx2/Makefile
@@ -34,6 +34,6 @@ SRCS-y += otx2_common.c
SRCS-y += otx2_sec_idev.c
LDLIBS += -lrte_eal
-LDLIBS += -lrte_ethdev
+LDLIBS += -lrte_ethdev -lrte_kvargs
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
index 996ddba14..f2c04342e 100644
--- a/drivers/common/octeontx2/meson.build
+++ b/drivers/common/octeontx2/meson.build
@@ -21,6 +21,6 @@ foreach flag: extra_flags
endif
endforeach
-deps = ['eal', 'pci', 'ethdev']
+deps = ['eal', 'pci', 'ethdev', 'kvargs']
includes += include_directories('../../common/octeontx2',
'../../mempool/octeontx2', '../../bus/pci')
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
index 1a257cf07..5e7272f69 100644
--- a/drivers/common/octeontx2/otx2_common.c
+++ b/drivers/common/octeontx2/otx2_common.c
@@ -169,6 +169,40 @@ int otx2_npa_lf_obj_ref(void)
return cnt ? 0 : -EINVAL;
}
+static int
+parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
+{
+ RTE_SET_USED(key);
+ uint64_t val;
+
+ val = strtoull(value, NULL, 16);
+
+ *(uint64_t *)extra_args = val;
+
+ return 0;
+}
+
+/*
+ * @internal
+ * Parse common device arguments
+ */
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
+{
+
+ struct otx2_idev_cfg *idev;
+ uint64_t npa_lock_mask = 0;
+
+ idev = otx2_intra_dev_get_cfg();
+
+ if (idev == NULL)
+ return;
+
+ rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
+ &parse_npa_lock_mask, &npa_lock_mask);
+
+ idev->npa_lock_mask = npa_lock_mask;
+}
+
/**
* @internal
*/
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
index 174702687..2168cde4d 100644
--- a/drivers/common/octeontx2/otx2_common.h
+++ b/drivers/common/octeontx2/otx2_common.h
@@ -8,6 +8,7 @@
#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_cycles.h>
+#include <rte_kvargs.h>
#include <rte_memory.h>
#include <rte_memzone.h>
#include <rte_io.h>
@@ -49,6 +50,8 @@
(~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
#endif
+#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
+
/* Intra device related functions */
struct otx2_npa_lf;
struct otx2_idev_cfg {
@@ -60,6 +63,7 @@ struct otx2_idev_cfg {
rte_atomic16_t npa_refcnt;
uint16_t npa_refcnt_u16;
};
+ uint64_t npa_lock_mask;
};
__rte_internal
@@ -78,6 +82,8 @@ __rte_internal
int otx2_npa_lf_active(void *dev);
__rte_internal
int otx2_npa_lf_obj_ref(void);
+__rte_internal
+void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
/* Log */
extern int otx2_logtype_base;
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 94af2ed69..c8fb2f4e4 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -37,6 +37,7 @@ INTERNAL {
otx2_sec_idev_tx_cpt_qp_get;
otx2_sec_idev_tx_cpt_qp_put;
otx2_logtype_ep;
+ otx2_parse_common_devargs;
local: *;
};
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index d20213d78..630073de5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1659,7 +1659,7 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
&single_ws);
rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
dev);
-
+ otx2_parse_common_devargs(kvlist);
dev->dual_ws = !single_ws;
rte_kvargs_free(kvlist);
}
@@ -1821,4 +1821,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
OTX2_SSO_SINGLE_WS "=1"
OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_SELFTEST "=1");
+ OTX2_SSO_SELFTEST "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 3a4a9425f..fb630fecf 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -191,6 +191,7 @@ otx2_parse_aura_size(struct rte_devargs *devargs)
goto exit;
rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
exit:
return aura_sz;
@@ -452,4 +453,5 @@ RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>");
+ OTX2_MAX_POOLS "=<128-1048576>"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 162b7f01d..ade9fa6d3 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -348,8 +348,13 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
struct npa_aq_enq_req *aura_init_req, *pool_init_req;
struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -ENOMEM;
+
aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
aura_init_req->aura_id = aura_id;
@@ -379,6 +384,44 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
return 0;
else
return NPA_LF_ERR_AURA_POOL_INIT;
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return -ENOMEM;
+ }
+
+ pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ if (!pool_init_req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to lock POOL ctx to NDC");
+ return -ENOMEM;
+ }
+
+ return 0;
}
static int
@@ -390,8 +433,13 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
struct otx2_mbox_dev *mdev = &mbox->dev[0];
struct ndc_sync_op *ndc_req;
+ struct otx2_idev_cfg *idev;
int rc, off;
+ idev = otx2_intra_dev_get_cfg();
+ if (idev == NULL)
+ return -EINVAL;
+
/* Procedure for disabling an aura/pool */
rte_delay_us(10);
npa_lf_aura_op_alloc(aura_handle, 0);
@@ -434,6 +482,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
return NPA_LF_ERR_AURA_POOL_FINI;
}
+
+ if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
+ return 0;
+
+ aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock AURA ctx to NDC");
+ return -EINVAL;
+ }
+
+ pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to unlock POOL ctx to NDC");
+ return -EINVAL;
+ }
+
return 0;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83868bc19..e8ddaa69f 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -163,6 +163,7 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
null_devargs:
@@ -188,4 +189,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b>"
- OTX2_RSS_TAG_AS_XOR "=1");
+ OTX2_RSS_TAG_AS_XOR "=1"
+ OTX2_NPA_LOCK_MASK "=<1-65535>");
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v5] mempool/octeontx2: add devargs to lock ctx in cache
2020-05-11 10:07 ` [dpdk-dev] [PATCH v5] " pbhagavatula
@ 2020-05-19 16:15 ` Thomas Monjalon
0 siblings, 0 replies; 28+ messages in thread
From: Thomas Monjalon @ 2020-05-19 16:15 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: jerinj, John McNamara, Marko Kovacevic, Nithin Dabilpuram,
Kiran Kumar K, Ray Kinsella, Neil Horman, dev
11/05/2020 12:07, pbhagavatula@marvell.com:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock NPA aura and pool contexts in NDC cache.
> The device args take a hexadecimal bitmask where each bit represents the
> corresponding aura/pool id.
> Example:
> -w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Applied, thanks
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v5] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
@ 2020-06-26 5:00 ` Jerin Jacob
2020-06-28 22:18 ` [dpdk-dev] [PATCH v6] " pbhagavatula
1 sibling, 0 replies; 28+ messages in thread
From: Jerin Jacob @ 2020-06-26 5:00 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Andrzej Ostruszka, Nithin Dabilpuram, Kiran Kumar K,
John McNamara, Marko Kovacevic, dpdk-dev
On Tue, Mar 31, 2020 at 7:29 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> The application can choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
> ---
> Depends on http://patches.dpdk.org/patch/67178/
> v5 Changes:
> - Remove redundant goto.(Andrzej)
>
> v4 Changes:
> - Fix return path using unnecessary goto.(Andrzej)
> - Fix datatype of values passed to devargs parser.(Andrzej)
>
> v3 Changes:
> - Split series into individual patches as targets are different.
Could you please rebase to dpdk-next-net-mrvl master and send v6.
> doc/guides/nics/octeontx2.rst | 16 ++
> drivers/net/octeontx2/otx2_ethdev.c | 196 +++++++++++++++++++-
> drivers/net/octeontx2/otx2_ethdev.h | 2 +
> drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
> drivers/net/octeontx2/otx2_rss.c | 23 +++
> 5 files changed, 244 insertions(+), 9 deletions(-)
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* [dpdk-dev] [PATCH v6] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-06-26 5:00 ` Jerin Jacob
@ 2020-06-28 22:18 ` pbhagavatula
2020-07-02 9:46 ` Jerin Jacob
1 sibling, 1 reply; 28+ messages in thread
From: pbhagavatula @ 2020-06-28 22:18 UTC (permalink / raw)
To: jerinj, Nithin Dabilpuram, Kiran Kumar K, John McNamara, Marko Kovacevic
Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add device arguments to lock Rx/Tx contexts.
The application can choose to lock Rx or Tx contexts by using
'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
Example:
-w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
---
v6 Changes:
- Rebase on next-net-mrvl.
v5 Changes:
- Remove redundant goto.(Andrzej)
v4 Changes:
- Fix return path using unnecessary goto.(Andrzej)
- Fix datatype of values passed to devargs parser.(Andrzej)
v3 Changes:
- Split series into individual patches as targets are different.
doc/guides/nics/octeontx2.rst | 16 ++
drivers/net/octeontx2/otx2_ethdev.c | 196 +++++++++++++++++++-
drivers/net/octeontx2/otx2_ethdev.h | 2 +
drivers/net/octeontx2/otx2_ethdev_devargs.c | 16 +-
drivers/net/octeontx2/otx2_rss.c | 23 +++
5 files changed, 244 insertions(+), 9 deletions(-)
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 24089ce67..bb591a8b7 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -210,6 +210,22 @@ Runtime Config Options
With the above configuration, application can enable inline IPsec processing
on 128 SAs (SPI 0-127).
+- ``Lock Rx contexts in NDC cache``
+
+ Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_rx_ctx=1
+
+- ``Lock Tx contexts in NDC cache``
+
+ Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
+
+ For example::
+
+ -w 0002:02:00.0,lock_tx_ctx=1
+
.. note::
Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 095506034..1c0fb0020 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -298,8 +298,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
NIX_CQ_ALIGN, dev->node);
if (rz == NULL) {
otx2_err("Failed to allocate mem for cq hw ring");
- rc = -ENOMEM;
- goto fail;
+ return -ENOMEM;
}
memset(rz->addr, 0, rz->len);
rxq->desc = (uintptr_t)rz->addr;
@@ -348,7 +347,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
rc = otx2_mbox_process(mbox);
if (rc) {
otx2_err("Failed to init cq context");
- goto fail;
+ return rc;
}
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
@@ -387,12 +386,44 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
rc = otx2_mbox_process(mbox);
if (rc) {
otx2_err("Failed to init rq context");
- goto fail;
+ return rc;
+ }
+
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to LOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_LOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK rq context");
+ return rc;
+ }
}
return 0;
-fail:
- return rc;
}
static int
@@ -439,6 +470,40 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
return rc;
}
+ if (dev->lock_rx_ctx) {
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK cq context");
+ return rc;
+ }
+
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!aq) {
+ otx2_err("Failed to UNLOCK rq context");
+ return -ENOMEM;
+ }
+ }
+ aq->qidx = rxq->rq;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK rq context");
+ return rc;
+ }
+ }
+
return 0;
}
@@ -724,6 +789,94 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
return flags;
}
+static int
+nix_sqb_lock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to LOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to LOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to lock POOL in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+nix_sqb_unlock(struct rte_mempool *mp)
+{
+ struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
+ struct npa_aq_enq_req *req;
+ int rc;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(npa_lf->mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
+ if (rc < 0) {
+ otx2_err("Failed to UNLOCK AURA context");
+ return rc;
+ }
+
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (!req) {
+ otx2_err("Failed to UNLOCK POOL context");
+ return -ENOMEM;
+ }
+ }
+ req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(npa_lf->mbox);
+ if (rc < 0) {
+ otx2_err("Unable to UNLOCK AURA in NDC");
+ return rc;
+ }
+
+ return 0;
+}
+
static int
nix_sq_init(struct otx2_eth_txq *txq)
{
@@ -766,7 +919,20 @@ nix_sq_init(struct otx2_eth_txq *txq)
/* Many to one reduction */
sq->sq.qint_idx = txq->sq % dev->qints;
- return otx2_mbox_process(mbox);
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ if (dev->lock_tx_ctx) {
+ sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ sq->qidx = txq->sq;
+ sq->ctype = NIX_AQ_CTYPE_SQ;
+ sq->op = NIX_AQ_INSTOP_LOCK;
+
+ rc = otx2_mbox_process(mbox);
+ }
+
+ return rc;
}
static int
@@ -809,6 +975,20 @@ nix_sq_uninit(struct otx2_eth_txq *txq)
if (rc)
return rc;
+ if (dev->lock_tx_ctx) {
+ /* Unlock sq */
+ aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = txq->sq;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_UNLOCK;
+
+ rc = otx2_mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ nix_sqb_unlock(txq->sqb_pool);
+ }
+
/* Read SQ and free sqb's */
aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
aq->qidx = txq->sq;
@@ -930,6 +1110,8 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
}
nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
+ if (dev->lock_tx_ctx)
+ nix_sqb_lock(txq->sqb_pool);
return 0;
fail:
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 0fbf68b8e..eb27ea200 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -273,6 +273,8 @@ struct otx2_eth_dev {
uint8_t max_mac_entries;
uint8_t lf_tx_stats;
uint8_t lf_rx_stats;
+ uint8_t lock_rx_ctx;
+ uint8_t lock_tx_ctx;
uint16_t flags;
uint16_t cints;
uint16_t qints;
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index e8ddaa69f..d4a85bf55 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -126,6 +126,8 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
#define OTX2_SWITCH_HEADER_TYPE "switch_header"
#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
+#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
+#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
int
otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
@@ -136,9 +138,11 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
uint16_t switch_header_type = 0;
uint16_t flow_max_priority = 3;
uint16_t ipsec_in_max_spi = 1;
- uint16_t scalar_enable = 0;
uint16_t rss_tag_as_xor = 0;
+ uint16_t scalar_enable = 0;
struct rte_kvargs *kvlist;
+ uint16_t lock_rx_ctx = 0;
+ uint16_t lock_tx_ctx = 0;
if (devargs == NULL)
goto null_devargs;
@@ -163,6 +167,10 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
&parse_switch_header_type, &switch_header_type);
rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
&parse_flag, &rss_tag_as_xor);
+ rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
+ &parse_flag, &lock_rx_ctx);
+ rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
+ &parse_flag, &lock_tx_ctx);
otx2_parse_common_devargs(kvlist);
rte_kvargs_free(kvlist);
@@ -171,6 +179,8 @@ otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
dev->scalar_ena = scalar_enable;
dev->rss_tag_as_xor = rss_tag_as_xor;
dev->max_sqb_count = sqb_count;
+ dev->lock_rx_ctx = lock_rx_ctx;
+ dev->lock_tx_ctx = lock_tx_ctx;
dev->rss_info.rss_size = rss_size;
dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
dev->npc_flow.flow_max_priority = flow_max_priority;
@@ -190,4 +200,6 @@ RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
OTX2_FLOW_MAX_PRIORITY "=<1-32>"
OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b>"
OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
+ OTX2_NPA_LOCK_MASK "=<1-65535>"
+ OTX2_LOCK_RX_CTX "=1"
+ OTX2_LOCK_TX_CTX "=1");
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 5e3f86681..d859937e6 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -33,6 +33,29 @@ otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
req->qidx = (group * rss->rss_size) + idx;
req->ctype = NIX_AQ_CTYPE_RSS;
req->op = NIX_AQ_INSTOP_INIT;
+
+ if (!dev->lock_rx_ctx)
+ continue;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ otx2_mbox_msg_send(mbox, 0);
+ rc = otx2_mbox_wait_for_rsp(mbox, 0);
+ if (rc < 0)
+ return rc;
+
+ req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return -ENOMEM;
+ }
+ req->rss.rq = ind_tbl[idx];
+ /* Fill AQ info */
+ req->qidx = (group * rss->rss_size) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_LOCK;
}
otx2_mbox_msg_send(mbox, 0);
--
2.17.1
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/octeontx2: add devargs to lock Rx/Tx ctx
2020-06-28 22:18 ` [dpdk-dev] [PATCH v6] " pbhagavatula
@ 2020-07-02 9:46 ` Jerin Jacob
0 siblings, 0 replies; 28+ messages in thread
From: Jerin Jacob @ 2020-07-02 9:46 UTC (permalink / raw)
To: Pavan Nikhilesh, Ferruh Yigit
Cc: Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, John McNamara,
Marko Kovacevic, dpdk-dev
On Mon, Jun 29, 2020 at 3:48 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> The application can choose to lock Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Applied to dpdk-next-net-mrvl/master. Thanks
^ permalink raw reply [flat|nested] 28+ messages in thread
end of thread, other threads:[~2020-07-02 9:46 UTC | newest]
Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-06 16:35 [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-19 9:36 ` Andrzej Ostruszka
2020-03-19 13:56 ` Pavan Nikhilesh Bhagavatula
2020-03-19 9:36 ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Andrzej Ostruszka
2020-03-19 13:35 ` Pavan Nikhilesh Bhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-25 6:51 ` [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Jerin Jacob
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-26 15:56 ` Andrzej Ostruszka [C]
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-03-27 16:19 ` Andrzej Ostruszka
2020-03-27 17:49 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-06-26 5:00 ` Jerin Jacob
2020-06-28 22:18 ` [dpdk-dev] [PATCH v6] " pbhagavatula
2020-07-02 9:46 ` Jerin Jacob
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-04-06 8:39 ` Jerin Jacob
2020-04-16 22:33 ` Thomas Monjalon
2020-04-21 7:37 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
2020-05-04 22:43 ` Thomas Monjalon
2020-05-10 22:35 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-05-11 10:07 ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-05-19 16:15 ` Thomas Monjalon