From: <pbhagavatula@marvell.com>
To: <jerinj@marvell.com>, John McNamara <john.mcnamara@intel.com>,
"Marko Kovacevic" <marko.kovacevic@intel.com>
Cc: <dev@dpdk.org>, Pavan Nikhilesh <pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH 7/7] doc: update OcteonTx limitations
Date: Sat, 16 Nov 2019 19:55:18 +0530 [thread overview]
Message-ID: <20191116142518.1500-8-pbhagavatula@marvell.com> (raw)
In-Reply-To: <20191116142518.1500-1-pbhagavatula@marvell.com>
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Update the OcteonTx limitations with the maximum supported mempool size.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
doc/guides/eventdevs/octeontx.rst | 7 +++++++
doc/guides/nics/octeontx.rst | 7 +++++++
2 files changed, 14 insertions(+)
diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
index ab36a36e0..587b7a427 100644
--- a/doc/guides/eventdevs/octeontx.rst
+++ b/doc/guides/eventdevs/octeontx.rst
@@ -139,3 +139,10 @@ follows:
When timvf is used as Event timer adapter event schedule type
``RTE_SCHED_TYPE_PARALLEL`` is not supported.
+
+Maximum mempool size
+~~~~~~~~~~~~~~~~~~~~
+
+The maximum mempool size used with the OcteonTx Eventdev (SSO) should be
+limited to 128K (131072). When running dpdk-test-eventdev on OcteonTx, the
+application can limit the number of mbufs with the option ``--pool_sz 131072``.
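
As an illustrative sketch (not part of the patch; the core list and test name
below are placeholders), a dpdk-test-eventdev invocation honouring this limit
could look like:

    dpdk-test-eventdev -l 0-3 -- --test=perf_queue --pool_sz 131072
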
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index 3c19c912d..00098a3b2 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -174,3 +174,10 @@ The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
+
+Maximum mempool size
+~~~~~~~~~~~~~~~~~~~~
+
+The maximum mempool size supplied to the Rx queue setup should not exceed
+128K (131072). When running testpmd on OcteonTx, the application can limit
+the number of mbufs by using the option ``--total-num-mbufs=131072``.
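
For illustration only (not part of the patch), here is a minimal sketch of
capping an application's mbuf pool to this limit before Rx queue setup, using
the standard mempool helper ``rte_pktmbuf_pool_create()``; the pool name and
cache size are arbitrary choices:

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define OCTEONTX_MAX_POOL_SZ (128 * 1024) /* 131072, the limit above */

    /* Create an mbuf pool no larger than 128K elements, staying within
     * the OcteonTx limitation documented in this patch. */
    static struct rte_mempool *
    octeontx_capped_pool(void)
    {
            return rte_pktmbuf_pool_create("mbuf_pool",
                    OCTEONTX_MAX_POOL_SZ,      /* n: at most 128K elements */
                    256,                       /* per-lcore cache size */
                    0,                         /* application private area */
                    RTE_MBUF_DEFAULT_BUF_SIZE, /* data room per mbuf */
                    rte_socket_id());
    }
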
--
2.24.0
Thread overview: 9+ messages
2019-11-16 14:25 [dpdk-dev] [PATCH 0/7] octeontx: sync with latest SDK pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 1/7] octeontx: update mbox definition to version 1.1.3 pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 2/7] net/octeontx: add application domain validation pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 3/7] net/octeontx: cleanup redundant mbox structs pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 4/7] mempool/octeontx: add application domain validation pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 5/7] event/octeontx: add application " pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 6/7] net/octeontx: make Rx queue offloads same as dev offloads pbhagavatula
2019-11-16 14:25 ` [dpdk-dev] [PATCH 7/7] doc: update OcteonTx limitations pbhagavatula [this message]
2019-11-19 2:43 ` [dpdk-dev] [PATCH 7/7] doc: update OcteonTx limitations Jerin Jacob